
radondb-mysql-kubernetes's Introduction

English | 简体中文

Open-source cloud-native database on Kubernetes

What is RadonDB MySQL

RadonDB MySQL is an open-source, cloud-native, and high-availability cluster solution based on MySQL. It adopts the architecture of one leader node and multiple replicas, with management capabilities for security, automatic backups, monitoring and alerting, automatic scaling, and so on.

RadonDB MySQL Kubernetes supports installation, deployment and management of RadonDB MySQL clusters on Kubernetes, KubeSphere and Rancher, and automates tasks involved in running RadonDB MySQL clusters.

Features

🧠 High-availability MySQL: Automatic decentralized leader election, failover within seconds, and strong data consistency during cluster switchover

✏️ Cluster management

💻 Monitoring and alerting

✍️ S3 backups and NFS backups

🎈 Log management

👨 Account management

🎨 Others

Architecture

  1. Automatic decentralized leader election via the Raft protocol

  2. Data synchronization via GTID-based semi-sync replication

  3. High availability provided through Xenon

(architecture diagram)

Roadmap

Version | Features | Mode
--- | --- | ---
3.0 | Automatic O&M, multiple node roles, disaster recovery, SSL transmission encryption | Operator
2.0 | Node management, cluster upgrade, backup and recovery, automatic failover, automatic node rebuilding, account management (API) | Operator
1.0 | Cluster management, monitoring and alerting, log management, account management | Helm

Quick start

👀 This tutorial demonstrates how to deploy a RadonDB MySQL cluster (Operator) on Kubernetes.

Preparation

📦 Prepare a Kubernetes cluster.

Steps

Step 1: Add a Helm repository.

helm repo add radondb https://radondb.github.io/radondb-mysql-kubernetes/

Step 2: Install Operator.

Set the release name to demo and create a Deployment named demo-mysql-operator.

helm install demo radondb/mysql-operator

Notice

This step also creates the CRD required by the cluster.

Step 3: Deploy a RadonDB MySQL Cluster.

Run the following command to create an instance of the mysqlclusters.mysql.radondb.com CRD and thereby create a RadonDB MySQL cluster by using the default parameters. To customize the cluster parameters, see Configuration Parameters.

kubectl apply -f https://github.com/radondb/radondb-mysql-kubernetes/releases/latest/download/mysql_v1alpha1_mysqlcluster.yaml
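Instead of applying the default manifest, you can apply a customized cluster definition of the same kind. The following is only an illustrative sketch: field names such as replicas and mysqlVersion are assumptions and should be verified against Configuration Parameters before use.

```yaml
# Hypothetical minimal MysqlCluster manifest; field names are assumptions.
apiVersion: mysql.radondb.com/v1alpha1
kind: MysqlCluster
metadata:
  name: sample
spec:
  replicas: 3          # one leader plus two replicas (assumed field)
  mysqlVersion: "5.7"  # assumed field; check the CRD for supported values
```

Applying this file with kubectl apply -f works the same way as applying the default manifest above.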

📖 For more information, see the documentation.

Who is using RadonDB MySQL

License

RadonDB MySQL is licensed under the Apache 2.0 license. See License.

Welcome to join us ❤️

😊 Website: https://radondb.com/

😁 Forum: Please join the RadonDB section of the KubeSphere Developer Forum.

🦉 Community WeChat group: Please add the group assistant radondb, who will invite you into the group.

For any bugs, questions, or suggestions about RadonDB MySQL, please create an issue on GitHub or post feedback on the forum.


radondb-mysql-kubernetes's People

Contributors

acekingke, andyli029, chengbaitai, fanwuu, hayleyling, hustjieke, kid-g, kukukukiki12138, lydialin2390, mgw2168, molliezhang, patrickluoyu, runkecheng, zhl003, zhyass


radondb-mysql-kubernetes's Issues

After restoring data from a backup in the percona image, the init SQL is skipped

Describe the problem
After restoring a pod from a database copy backed up to the cloud, we found the problem below.
(screenshot)

Cause: on every start, the entrypoint checks whether $DATADIR/mysql exists and only initializes the mysql schema and runs the init SQL when it does not. Suggestion: move the docker-entrypoint-initdb.d handling out of the $DATADIR/mysql conditional.

if [ ! -d "$DATADIR/mysql" ]; then
  # other code....

  mysql=( mysql --protocol=socket -uroot -hlocalhost --socket="${SOCKET}" --password="" )
  # other code....
  
  echo
  ls /docker-entrypoint-initdb.d/ > /dev/null
  for f in /docker-entrypoint-initdb.d/*; do
  	process_init_file "$f" "${mysql[@]}"
  done

  if ! kill -s TERM "$pid" || ! wait "$pid"; then
  	echo >&2 'MySQL init process failed.'
  	exit 1
  fi

  sed -i '/server-id/d' /etc/mysql/my.cnf
  chown -R mysql:mysql "$DATADIR"
fi

rm -f /var/log/mysql/error.log
rm -f /var/lib/mysql/auto.cnf

uuid=$(cat /proc/sys/kernel/random/uuid)
printf '[auto]\nserver_uuid=%s' $uuid > /var/lib/mysql/auto.cnf

To Reproduce

Expected behavior

Environment:

  • RadonDB MySQL version:

[feature] update kubebuilder v2 to v3

Is your feature request related to a problem? Please describe.

We need to use the latest version of kubebuilder, v3.1.0.

Describe the solution you'd like

None

Describe alternatives you've considered

Additional context

[bug] xenon error log

2021/04/13 02:02:26.712090 api.go:379:      [WARNING]  mysql[localhost:3306].SetSlaveGlobalSysVar[tokudb_fsync_log_period=1000;sync_binlog=1000;innodb_flush_log_at_trx_commit=1]
 2021/04/13 02:02:26.712117 trace.go:37:         [ERROR]    FOLLOWER[ID:kr-test-kryp-0.kr-test-kryp.krypton-deploy.svc.cluster.local:8801, V:0, E:0].mysql.SetSlaveGlobalSysVar.error[Error 1193: Unknown system variable 'tokudb_fsync_log_period']

If initTokudb is false, the cluster does not install the TokuDB engine, so the variable tokudb_fsync_log_period is not recognized.

[documentation] remove the step to configure the root password

Is your feature request related to a problem? Please describe.

  • Logging in directly with the root account is not recommended

  • To log in to the root account with a password, uncomment mysqlRootPassword and set allowEmptyRootPassword to false

Describe the solution you'd like

  • Remove the step to configure the root password
  • Recommend creating a superuser with the xenoncli command

Execute the following instructions in the xenon container of the leader node to create a super user:

xenoncli mysql createsuperuser <user> <host> <password> <YES / NO>

The last parameter specifies whether to enable SSL.

Describe alternatives you've considered

Added a document introducing xenoncli commands

Additional context

none

make fails with Go 1.16.6

Describe the problem
make fails with Go 1.16.6.

To Reproduce

#make vet
go vet ./...
# github.com/radondb/radondb-mysql-kubernetes/utils
utils/unsafe.go:33:35: possible misuse of reflect.StringHeader
utils/unsafe.go:45:35: possible misuse of reflect.SliceHeader
make: *** [vet] Error 2

Expected behavior

#make vet
go vet ./...

Environment:

  • RadonDB MySQL version:

[feature] Fix image registry to radondb/

Is your feature request related to a problem? Please describe.
Change the image registry to radondb/.

Describe the solution you'd like

Describe alternatives you've considered

Additional context

[bug] 403 Forbidden when accessing gcr.azk8s.cn

Describe the problem

Failed to build manager docker image:

[+] Building 3.2s (4/4) FINISHED                                                                                                
 => [internal] load build definition from Dockerfile                                                                       0.0s
 => => transferring dockerfile: 980B                                                                                       0.0s
 => [internal] load .dockerignore                                                                                          0.0s
 => => transferring context: 35B                                                                                           0.0s
 => [internal] load metadata for docker.io/library/golang:1.16                                                             1.2s
 => ERROR [internal] load metadata for gcr.azk8s.cn/distroless/static:nonroot                                              3.1s
------
 > [internal] load metadata for gcr.azk8s.cn/distroless/static:nonroot:
------
failed to solve with frontend dockerfile.v0: failed to create LLB definition: unexpected status code [manifests nonroot]: 403 Forbidden

see:
https://kubesphere.com.cn/forum/d/1084-gcr-azk8s-cn
https://github.com/GoogleContainerTools/distroless

In Dockerfile, we pull image from gcr.azk8s.cn:

 24 # Use distroless as minimal base image to package the manager binary
 25 # Refer to https://github.com/GoogleContainerTools/distroless for more details
 26 FROM gcr.azk8s.cn/distroless/static:nonroot

To Reproduce

  1. make docker-build

Expected behavior

Environment:

  • RadonDB MySQL version:

[feature] support rolling update

Is your feature request related to a problem? Please describe.

Describe the solution you'd like

Upgrade the follower nodes first, and upgrade the leader node last.

For a two-node cluster, we need to adjust the parameter rpl_semi_sync_master_timeout before upgrading the follower node. Otherwise, the leader node will hang due to semi-sync.

Describe alternatives you've considered

Additional context

Reference: Qingcloud MySQL Plus

Issue with the root user

After deploying RadonDB on Kubernetes, the root account cannot log in by default. How can this be changed?

Leader switches when xenon on the slave node quits

Summary

The leader switches when xenon on the slave node quits.
The log output is noisy:

test-krypton-1: the old leader -> the new slave

2021/04/07 11:39:21.222243 api.go:280: [INFO] mysql.slave.status:&{ 0 false false }

2021/04/07 11:39:21.222424 api.go:290: [INFO] mysql.master.status:&{mysql-bin.000002 154 true true 0 }

2021/04/07 11:39:22.973750 trace.go:37: [ERROR] LEADER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:11, E:2].send.heartbeat.to.peer[test-krypton-0.test-krypton.default.svc.cluster.local:8801].new.client.error[dial tcp: lookup test-krypton-0.test-krypton.default.svc.cluster.local: no such host]

2021/04/07 11:39:22.973784 trace.go:37: [ERROR] LEADER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:11, E:2].send.heartbeat.get.rsp[N:, V:0, E:0].error[ErrorRpcCall]

2021/04/07 11:39:24.974169 trace.go:37: [ERROR] LEADER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:11, E:2].send.heartbeat.to.peer[test-krypton-0.test-krypton.default.svc.cluster.local:8801].new.client.error[dial tcp: lookup test-krypton-0.test-krypton.default.svc.cluster.local: no such host]

2021/04/07 11:39:24.974207 trace.go:37: [ERROR] LEADER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:11, E:2].send.heartbeat.get.rsp[N:, V:0, E:0].error[ErrorRpcCall]

2021/04/07 11:39:25.227313 trace.go:32: [WARNING] LEADER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:11, E:2].get.voterequest.from[{Raft:{EpochID:2 ViewID:12 Leader: From:test-krypton-0.test-krypton.default.svc.cluster.local:8801 To:test-krypton-1.test-krypton.default.svc.cluster.local:8801 State:} Repl:{Master_Host: Master_Port:0 Repl_User: Repl_Password:} GTID:{Master_Log_File:mysql-bin.000002 Read_Master_Log_Pos:154 Relay_Master_Log_File: Slave_IO_Running:false Slave_IO_Running_Str:No Slave_SQL_Running:true Slave_SQL_Running_Str:Yes Retrieved_GTID_Set: Executed_GTID_Set: Seconds_Behind_Master: Slave_SQL_Running_State:Slave has read all relay log; waiting for more updates Last_Error: Last_IO_Error: Last_SQL_Error:} Peers:[] IdlePeers:[]}]

2021/04/07 11:39:25.227571 api.go:280: [INFO] mysql.slave.status:&{ 0 false false }

2021/04/07 11:39:25.228121 api.go:290: [INFO] mysql.master.status:&{mysql-bin.000002 154 true true 0 }

2021/04/07 11:39:25.228151 api.go:104: [WARNING] mysql.gtid.compare.this[{mysql-bin.000002 154 true true 0 }].from[&{mysql-bin.000002 154 false No true Yes Slave has read all relay log; waiting for more updates }]

2021/04/07 11:39:25.228165 trace.go:32: [WARNING] LEADER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:11, E:2].get.requestvote.from[N:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].degrade.to.follower

2021/04/07 11:39:25.228172 trace.go:32: [WARNING] LEADER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:11, E:2].do.updateViewID[FROM:11 TO:12]

2021/04/07 11:39:25.228177 trace.go:32: [WARNING] LEADER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:12, E:2].degrade.to.follower.stop.the.vip...

2021/04/07 11:39:25.334647 trace.go:32: [WARNING] LEADER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:12, E:2].leaderStopShellCommand[[-c /scripts/leader-stop.sh]].done

2021/04/07 11:39:25.334685 trace.go:27: [INFO] LEADER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:12, E:2].check.semi-sync.thread.stop...

2021/04/07 11:39:25.334692 trace.go:27: [INFO] LEADER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:12, E:2].check.gtid.thread.stop...

2021/04/07 11:39:25.334712 trace.go:32: [WARNING] FOLLOWER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:12, E:2].leader.state.machine.exit.done

2021/04/07 11:39:25.334719 trace.go:27: [INFO] FOLLOWER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:12, E:2].start.CheckBrainSplit

2021/04/07 11:39:25.334739 trace.go:32: [WARNING] FOLLOWER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:12, E:2].state.init

2021/04/07 11:39:25.510949 trace.go:32: [WARNING] FOLLOWER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:12, E:2].leaderStopShellCommand[[-c /scripts/leader-stop.sh]].done

2021/04/07 11:39:25.510996 trace.go:32: [WARNING] FOLLOWER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:12, E:2].mysql.waitMysqlDoneAsync.prepare

2021/04/07 11:39:25.511017 trace.go:32: [WARNING] FOLLOWER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:12, E:2].state.machine.run

2021/04/07 11:39:26.228289 trace.go:32: [WARNING] FOLLOWER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:12, E:2].mysql.SetReadOnly.done

2021/04/07 11:39:26.228474 api.go:376: [ERROR] mysql[localhost:3306].SetSlaveGlobalSysVar.error[Error 1193: Unknown system variable 'tokudb_fsync_log_period'].var[SET GLOBAL tokudb_fsync_log_period=1000]

2021/04/07 11:39:26.228749 api.go:379: [WARNING] mysql[localhost:3306].SetSlaveGlobalSysVar[tokudb_fsync_log_period=1000;sync_binlog=1000;innodb_flush_log_at_trx_commit=1]

2021/04/07 11:39:26.228774 trace.go:37: [ERROR] FOLLOWER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:12, E:2].mysql.SetSlaveGlobalSysVar.error[Error 1193: Unknown system variable 'tokudb_fsync_log_period']

2021/04/07 11:39:26.228782 trace.go:32: [WARNING] FOLLOWER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:12, E:2].mysql.SetSlaveGlobalSysVar.done

2021/04/07 11:39:26.228789 trace.go:32: [WARNING] FOLLOWER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:12, E:2].prepareAsync.done

2021/04/07 11:39:26.245368 trace.go:37: [ERROR] FOLLOWER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:12, E:2].mysql.StartSlave.error[Error 1200: The server is not configured as slave; fix in config file or with CHANGE MASTER TO]

2021/04/07 11:39:26.245396 api.go:280: [INFO] mysql.slave.status:&{ 0 false false }

2021/04/07 11:39:26.245575 api.go:290: [INFO] mysql.master.status:&{mysql-bin.000002 154 true true 0 }

2021/04/07 11:39:26.245593 trace.go:32: [WARNING] FOLLOWER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:12, E:2].init.my.gtid.is:{mysql-bin.000002 154 true true 0 }

2021/04/07 11:39:26.245619 api.go:280: [INFO] mysql.slave.status:&{ 0 false false }

2021/04/07 11:39:26.308777 api.go:290: [INFO] mysql.master.status:&{mysql-bin.000002 154 true true 0 }

2021/04/07 11:39:26.308839 trace.go:32: [WARNING] FOLLOWER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:12, E:2].get.heartbeat.my.gtid.is:{mysql-bin.000002 154 true true 0 }

2021/04/07 11:39:26.309177 trace.go:32: [WARNING] FOLLOWER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:12, E:2].get.heartbeat.from[N:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].change.mysql.master

2021/04/07 11:39:26.345505 trace.go:32: [WARNING] FOLLOWER[ID:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:12, E:2].get.heartbeat.change.to.the.new.master[test-krypton-0.test-krypton.default.svc.cluster.local:8801].successed

test-krypton-0: the old slave -> the new leader

2021/04/07 11:39:20.006492 trace.go:32: [WARNING] FOLLOWER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:11, E:2].ping.responses[1].is.less.than.half.maybe.brain.split

2021/04/07 11:39:20.009268 trace.go:32: [WARNING] FOLLOWER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:11, E:2].ping.responses[2].is.greater.than.half.again

2021/04/07 11:39:25.218234 trace.go:32: [WARNING] FOLLOWER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:11, E:2].timeout.to.do.new.election

2021/04/07 11:39:25.218766 api.go:280: [INFO] mysql.slave.status:&{mysql-bin.000002 154 false No true Yes Slave has read all relay log; waiting for more updates }

2021/04/07 11:39:25.218785 api.go:56: [WARNING] mysql[localhost:3306].Promotable.sql_thread[true]

2021/04/07 11:39:25.218793 trace.go:32: [WARNING] FOLLOWER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:11, E:2].timeout.and.ping.almost.node.successed.promote.to.candidate

2021/04/07 11:39:25.219031 trace.go:32: [WARNING] CANDIDATE[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:11, E:2].follower.state.machine.exit

2021/04/07 11:39:25.219056 trace.go:32: [WARNING] CANDIDATE[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:11, E:2].state.machine.run

2021/04/07 11:39:25.219098 trace.go:32: [WARNING] CANDIDATE[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].prepare.send.requestvote.to[test-krypton-2.test-krypton.default.svc.cluster.local:8801]

2021/04/07 11:39:25.219192 trace.go:32: [WARNING] CANDIDATE[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].prepare.send.requestvote.to[test-krypton-1.test-krypton.default.svc.cluster.local:8801]

2021/04/07 11:39:25.219569 api.go:280: [INFO] mysql.slave.status:&{mysql-bin.000002 154 false No true Yes Slave has read all relay log; waiting for more updates }

2021/04/07 11:39:25.219621 trace.go:32: [WARNING] CANDIDATE[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].send.requestvote.to.peer[test-krypton-2.test-krypton.default.svc.cluster.local:8801].request.gtid[{mysql-bin.000002 154 false No true Yes Slave has read all relay log; waiting for more updates }]

2021/04/07 11:39:25.219706 api.go:280: [INFO] mysql.slave.status:&{mysql-bin.000002 154 false No true Yes Slave has read all relay log; waiting for more updates }

2021/04/07 11:39:25.219726 trace.go:32: [WARNING] CANDIDATE[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].send.requestvote.to.peer[test-krypton-1.test-krypton.default.svc.cluster.local:8801].request.gtid[{mysql-bin.000002 154 false No true Yes Slave has read all relay log; waiting for more updates }]

2021/04/07 11:39:25.226070 trace.go:32: [WARNING] CANDIDATE[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].send.requestvote.done.to[test-krypton-2.test-krypton.default.svc.cluster.local:8801]

2021/04/07 11:39:25.332622 trace.go:32: [WARNING] CANDIDATE[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].send.requestvote.done.to[test-krypton-1.test-krypton.default.svc.cluster.local:8801]

2021/04/07 11:39:25.332677 trace.go:32: [WARNING] CANDIDATE[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].get.vote.response.from[N:test-krypton-2.test-krypton.default.svc.cluster.local:8801, R:FOLLOWER].rsp.gtid[{mysql-bin.000002 154 false No true Yes Slave has read all relay log; waiting for more updates }].retcode[OK]

2021/04/07 11:39:25.332690 trace.go:27: [INFO] CANDIDATE[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].get.vote.response.from[N:test-krypton-2.test-krypton.default.svc.cluster.local:8801, V:11].ok.votegranted[2].majoyrity[2]

2021/04/07 11:39:25.332705 trace.go:32: [WARNING] CANDIDATE[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].get.vote.response.from[N:test-krypton-1.test-krypton.default.svc.cluster.local:8801, R:LEADER].rsp.gtid[{mysql-bin.000002 154 true true 0 }].retcode[OK]

2021/04/07 11:39:25.332725 trace.go:27: [INFO] CANDIDATE[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].get.vote.response.from[N:test-krypton-1.test-krypton.default.svc.cluster.local:8801, V:11].ok.votegranted[3].majoyrity[2]

2021/04/07 11:39:25.332733 trace.go:32: [WARNING] CANDIDATE[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].grants.unanimous.votes[3]/members[3].become.leader

2021/04/07 11:39:25.332746 trace.go:32: [WARNING] LEADER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].candidate.state.machine.exit

2021/04/07 11:39:25.332755 trace.go:32: [WARNING] LEADER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].state.init

2021/04/07 11:39:25.332769 trace.go:27: [INFO] LEADER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].purge.binlog.start[300000ms]...

2021/04/07 11:39:25.332779 trace.go:27: [INFO] LEADER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].check.semi-sync.thread.start[5000ms]...

2021/04/07 11:39:25.332787 trace.go:27: [INFO] LEADER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].check.gtid.thread.start[5000ms]...

2021/04/07 11:39:25.332794 trace.go:32: [WARNING] LEADER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].async.setting.prepare....

2021/04/07 11:39:25.332829 trace.go:32: [WARNING] LEADER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].state.machine.run

2021/04/07 11:39:25.334038 api.go:280: [INFO] mysql.slave.status:&{mysql-bin.000002 154 false No true Yes Slave has read all relay log; waiting for more updates }

2021/04/07 11:39:25.334064 trace.go:32: [WARNING] LEADER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].my.gtid.is:{mysql-bin.000002 154 false No true Yes Slave has read all relay log; waiting for more updates }

2021/04/07 11:39:25.334074 trace.go:32: [WARNING] LEADER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].1. mysql.WaitUntilAfterGTID.prepare

2021/04/07 11:39:25.392320 trace.go:32: [WARNING] LEADER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].mysql.WaitUntilAfterGTID.done

2021/04/07 11:39:25.392348 trace.go:32: [WARNING] LEADER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].2. mysql.ChangeToMaster.prepare

2021/04/07 11:39:25.412773 trace.go:32: [WARNING] LEADER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].mysql.ChangeToMaster.done

2021/04/07 11:39:25.412829 trace.go:32: [WARNING] LEADER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].3. mysql.EnableSemiSyncMaster.prepare

2021/04/07 11:39:25.413856 trace.go:32: [WARNING] LEADER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].mysql.EnableSemiSyncMaster.done

2021/04/07 11:39:25.413868 trace.go:32: [WARNING] LEADER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].4.mysql.SetSysVars.prepare

2021/04/07 11:39:25.414082 api.go:356: [ERROR] mysql[localhost:3306].SetMasterGlobalSysVar.error[Error 1193: Unknown system variable 'tokudb_fsync_log_period'].var[SET GLOBAL tokudb_fsync_log_period=default]

2021/04/07 11:39:25.414338 api.go:359: [WARNING] mysql[localhost:3306].SetMasterGlobalSysVar[tokudb_fsync_log_period=default;sync_binlog=default;innodb_flush_log_at_trx_commit=default]

2021/04/07 11:39:25.414348 trace.go:32: [WARNING] LEADER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].mysql.SetSysVars.done

2021/04/07 11:39:25.414354 trace.go:32: [WARNING] LEADER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].5. mysql.SetReadWrite.prepare

2021/04/07 11:39:25.414572 trace.go:32: [WARNING] LEADER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].mysql.SetReadWrite.done

2021/04/07 11:39:25.414585 trace.go:32: [WARNING] LEADER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].6. start.vip.prepare

2021/04/07 11:39:25.531878 trace.go:32: [WARNING] LEADER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].leaderStartShellCommand[[-c /scripts/leader-start.sh]].done

2021/04/07 11:39:25.531905 trace.go:32: [WARNING] LEADER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].start.vip.done

2021/04/07 11:39:25.531920 trace.go:32: [WARNING] LEADER[ID:test-krypton-0.test-krypton.default.svc.cluster.local:8801, V:12, E:2].async.setting.all.done....

2021/04/07 11:39:30.333380 api.go:280: [INFO] mysql.slave.status:&{ 0 false false }

2021/04/07 11:39:30.333841 api.go:290: [INFO] mysql.master.status:&{mysql-bin.000002 154 true true 0 }

TEST Issue template

Describe the problem

To Reproduce

Expected behavior

Environment:

  • Kubernetes version [e.g. v1.19.5]
  • Helm version [e.g. v3.4.2]
  • RadonDB MySQL version [e.g. https://github.com/radondb/radondb-mysql-kubernetes/commit/56896cd119d9a6017e7f15f1dc0b37b83a720278]

[bug] innodb_buffer_pool_size cannot be set correctly when its value is greater than int32

Describe the problem
innodb_buffer_pool_size is one of MysqlConf's keys.

type MysqlConf map[string]intstr.IntOrString

The type of innodbBufferPoolSize is int32.

type IntOrString struct {
	Type   Type   `protobuf:"varint,1,opt,name=type,casttype=Type"`
	IntVal int32  `protobuf:"varint,2,opt,name=intVal"`
	StrVal string `protobuf:"bytes,3,opt,name=strVal"`
}

func (c *Cluster) EnsureMysqlConf() {
	...
	c.Spec.MysqlOpts.MysqlConf["innodb_buffer_pool_size"] = intstr.FromInt(int(innodbBufferPoolSize))
	c.Spec.MysqlOpts.MysqlConf["innodb_buffer_pool_instances"] = intstr.FromInt(int(instances))
}
func FromInt(val int) IntOrString {
	if val > math.MaxInt32 || val < math.MinInt32 {
		klog.Errorf("value: %d overflows int32\n%s\n", val, debug.Stack())
	}
	return IntOrString{Type: Int, IntVal: int32(val)}
}

In fact, innodbBufferPoolSize is likely to overflow int32.
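The truncation is easy to demonstrate in isolation. The sketch below uses a self-contained stand-in for intstr.FromInt (the real function in k8s.io/apimachinery logs the overflow but still truncates); toInt32 is a hypothetical name, not the operator's actual code:

```go
package main

import (
	"fmt"
	"math"
)

// toInt32 mimics the truncation inside intstr.FromInt: out-of-range values
// are reported, but the conversion to int32 happens regardless, keeping
// only the low 32 bits.
func toInt32(val int) int32 {
	if val > math.MaxInt32 || val < math.MinInt32 {
		fmt.Printf("value %d overflows int32\n", val)
	}
	return int32(val) // only the low 32 bits survive
}

func main() {
	pool := 8 * 1024 * 1024 * 1024 // an 8 GiB buffer pool in bytes
	fmt.Println(toInt32(pool))     // 8 GiB is a multiple of 2^32, so this prints 0
}
```

An 8 GiB buffer pool (2^33 bytes) silently becomes 0, which is why storing the size in a wider type or switching the unit (e.g. MiB) is needed.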
To Reproduce

Expected behavior

Use another data type to store innodbBufferPoolSize, or change the memory unit.

Environment:

  • RadonDB MySQL version: operator

[documentation] Add a table of contents to each file

Add a table of contents. For example:

# **在 Kubesphere 上部署 RadonDB MySQL 集群**

## **简介**

RadonDB MySQL 是基于 MySQL 的开源、高可用、云原生集群解决方案。通过使用 Raft 协议,RadonDB MySQL 可以快速进行故障转移,且不会丢失任何事务。

## **部署准备**

### **安装 KubeSphere**

可选择如下安装方式:
...

Changed to:

   * [<strong>在 Kubesphere 上部署 RadonDB MySQL 集群</strong>](#在-kubesphere-上部署-radondb-mysql-集群) 
      * [<strong>简介</strong>](#简介) 
      * [<strong>部署准备</strong>](#部署准备) 
         * [<strong>安装 KubeSphere</strong>](#安装-kubesphere)
# **在 Kubesphere 上部署 RadonDB MySQL 集群**

## **简介**

RadonDB MySQL 是基于 MySQL 的开源、高可用、云原生集群解决方案。通过使用 Raft 协议,RadonDB MySQL 可以快速进行故障转移,且不会丢失任何事务。

## **部署准备**

### **安装 KubeSphere**

可选择如下安装方式:

[enhancement] add PreStop for xenon container

Is your feature request related to a problem? Please describe.

Describe the solution you'd like

For example, with pods pod0, pod1, and pod2:

If pod2 stops, then before its xenon container stops, run xenoncli cluster remove pod2:8801 in pod0 and pod1.

Describe alternatives you've considered

Additional context

[feature] add status api to support update the cluster status

Is your feature request related to a problem? Please describe.

Add a status API to support updating the cluster status.

Describe the solution you'd like

The cluster status can then be updated promptly.

Describe alternatives you've considered

Additional context

[feature] support 1 replica

When replicaCount is 1:
1. Remove krypton container.
2. Set global read_only=off.
3. The label 'role' is master.
4. Remove slave service.

[documentation] fix some links not consistent with install manual title

- [Kubernetes 平台部署](docs/Kubernetes/deploy_xenondb_on_kubernetes.md)
- [KubeSphere 应用商店部署](docs/KubeSphere/deploy_xenondb_on_kubesphere.md)

在 Kubenetes 在 Kubernetes 上部署 XenonDB 集群
在 Kubesphere 上部署 XenonDB 集群

Modify the descriptions of the two links so that they are consistent with the titles of the original documents.

[enhancement] add pull request and issue templates

Is your feature request related to a problem? Please describe.

I want to customize and standardize the information I'd like contributors to include when they open issues and pull requests in radondb-mysql-kubernetes.

Describe the solution you'd like

Refer to the GitHub article, and add pull request and issue templates for radondb-mysql-kubernetes.

Describe alternatives you've considered

N/A

Additional context

N/A

Leader is read_only

Summary

mysql> show variables like '%read_only%';
+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| innodb_read_only      | OFF   |
| read_only             | ON    |
| super_read_only       | OFF   |
| transaction_read_only | OFF   |
| tx_read_only          | OFF   |
+-----------------------+-------+

[bug] Slave failed to initialize relay log info structure from the repository

Describe:
When the cluster is installed using an existing PVC, the slave reports the error "Slave failed to initialize relay log info structure from the repository".

process:

# use pvc
helm install test
# after success, do CRUD on MySQL. Then uninstall the cluster and retain the pvc.
helm delete test
# recreate test
helm install test

Reason:
The binlog and relay log aren't mounted.

[enhancement] modify the xenon container post start logic

Is your feature request related to a problem? Please describe.

Describe the solution you'd like

xenon-0, xenon-1, xenon-2

This must be a create, not an update.

  1. When xenon-0 starts, run:
     xenoncli raft trytoleader
  2. When xenon-1 starts, run:
     for xenon-1:
     xenoncli cluster add xenon-0
     for xenon-0:
     xenoncli cluster add xenon-1
  3. When xenon-2 starts, run:
     for xenon-2:
     xenoncli cluster add xenon-0,xenon-1
     for xenon-0:
     xenoncli cluster add xenon-2
     for xenon-1:
     xenoncli cluster add xenon-2

Describe alternatives you've considered

Additional context

[feature] Add xtrabackup manager

Is your feature request related to a problem? Please describe.

We need xtrabackup backups to cloud storage for the cluster.
Describe the solution you'd like
Make a backup like this:

kubectl apply -f mysql_v1alpha1_backup.yaml

This backs up the database data to S3 storage or a persistent volume.
The backup copy could then be used to create a cluster; add a RestoreFrom field in mysql_v1alpha1_cluster.yaml:

kubectl apply -f mysql_v1alpha1_cluster.yaml

This creates the cluster from the backup copy.

Describe alternatives you've considered

Additional context

Crash

Readiness probe failed: runtime/cgo: pthread_create failed: Resource temporarily unavailable
SIGABRT: abort
runtime/cgo: pthread_create failed: Resource temporarily unavailable
SIGABRT: abort
PC=0x7f01c023b18b m=0 sigcode=18446744073709551610

goroutine 0 [idle]:
runtime: unknown pc 0x7f01c023b18b
stack: frame={sp:0x7fff3a945c00, fp:0x0} stack=[0x7fff3a147278,0x7fff3a9462b0)
00007fff3a945b00:  00007f01c043d190  00007f01c043d4f8 
00007fff3a945b10:  0000000000000000  00007fff3a945b64 
00007fff3a945b20:  00007fff00000000  00007fff00000001 
00007fff3a945b30:  0000000000679694 <crypto/tls.(*serverHelloMsg).marshal.func1.2.4.1+116>  00007fff3a945c20 
00007fff3a945b40:  00007f01c01ff860  00007f01c040a500 
00007fff3a945b50:  0000000000000004  0000000000000004 
00007fff3a945b60:  000000000022be0e  0000000000000000 
00007fff3a945b70:  0000000000000000  000000008ff9ac08 
00007fff3a945b80:  00007f01c043d4f8  0000000000c55078 
00007fff3a945b90:  00000000009a111a  0000000002c04c10 
00007fff3a945ba0:  0000000000000000  000000000098bd64 
00007fff3a945bb0:  0000000000000000  00007f01c041f187 
00007fff3a945bc0:  0000000000000005  0000000000000000 
00007fff3a945bd0:  0000000000000001  00007f01c01ff860 
00007fff3a945be0:  00007fff3a945e30  00007f01c0426aa7 
00007fff3a945bf0:  000000000000000a  00007f01c030621f 
00007fff3a945c00: <0000000000000000  00007f01c03e1643 
00007fff3a945c10:  00007f01c03e34b0  000000000000000a 
00007fff3a945c20:  0000000000000037  0000000000a28e40 
00007fff3a945c30:  000000000000037f  0000000000000000 
00007fff3a945c40:  0000000000000000  0000ffff00001fa0 
00007fff3a945c50:  0000000000000000  0000000000000000 
00007fff3a945c60:  0000000000000000  0000000000000000 
00007fff3a945c70:  0000000000000000  0000000000000000 
00007fff3a945c80:  fffffffe7fffffff  ffffffffffffffff 
00007fff3a945c90:  ffffffffffffffff  ffffffffffffffff 
00007fff3a945ca0:  ffffffffffffffff  ffffffffffffffff 
00007fff3a945cb0:  ffffffffffffffff  ffffffffffffffff 
00007fff3a945cc0:  ffffffffffffffff  ffffffffffffffff 
00007fff3a945cd0:  ffffffffffffffff  ffffffffffffffff 
00007fff3a945ce0:  ffffffffffffffff  ffffffffffffffff 
00007fff3a945cf0:  ffffffffffffffff  ffffffffffffffff 

goroutine 1 [runnable, locked to thread]:
text/template/parse.(*lexer).nextItem(...)
  /usr/local/go/src/text/template/parse/lex.go:195
text/template/parse.(*Tree).next(...)
  /usr/local/go/src/text/template/parse/parse.go:64
text/template/parse.(*Tree).peekNonSpace(0xc000106100, 0x0, 0x0, 0x0, 0x0, 0x0)
  /usr/local/go/src/text/template/parse/parse.go:113 +0x154
text/template/parse.(*Tree).itemList(0xc000106100, 0x8f65e4, 0x5, 0xc00006e0c0)
  /usr/local/go/src/text/template/parse/parse.go:326 +0xbe
text/template/parse.(*Tree).parseControl(0xc000106100, 0x915700, 0x8f65e4, 0x5, 0x0, 0x0, 0xc00006e0c0, 0x0, 0x0)
  /usr/local/go/src/text/template/parse/parse.go:459 +0x100
text/template/parse.(*Tree).rangeControl(0xc000106100, 0x1d, 0x9c)
  /usr/local/go/src/text/template/parse/parse.go:501 +0x4c
text/template/parse.(*Tree).action(0xc000106100, 0xa, 0x9a)
  /usr/local/go/src/text/template/parse/parse.go:368 +0x4d7
text/template/parse.(*Tree).textOrAction(0xc000106100, 0xa, 0x9a)
  /usr/local/go/src/text/template/parse/parse.go:345 +0x293
text/template/parse.(*Tree).itemList(0xc000106100, 0x8f65e4, 0x5, 0xc00006e060)
  /usr/local/go/src/text/template/parse/parse.go:327 +0xf9
text/template/parse.(*Tree).parseControl(0xc000106100, 0x0, 0x8f65e4, 0x5, 0x0, 0x0, 0xc00006e060, 0x0, 0x0)
  /usr/local/go/src/text/template/parse/parse.go:459 +0x100
text/template/parse.(*Tree).rangeControl(0xc000106100, 0x1d, 0x2b)
  /usr/local/go/src/text/template/parse/parse.go:501 +0x4c
text/template/parse.(*Tree).action(0xc000106100, 0xa, 0x29)
  /usr/local/go/src/text/template/parse/parse.go:368 +0x4d7
text/template/parse.(*Tree).textOrAction(0xc000106100, 0x1d, 0x2b)
  /usr/local/go/src/text/template/parse/parse.go:345 +0x293
text/template/parse.(*Tree).parse(0xc000106100)
  /usr/local/go/src/text/template/parse/parse.go:291 +0x381
text/template/parse.(*Tree).Parse(0xc000106100, 0x91573d, 0x172, 0x0, 0x0, 0x0, 0x0, 0xc000093a70, 0xc000091d40, 0x2, ...)
  /usr/local/go/src/text/template/parse/parse.go:230 +0x247
text/template/parse.Parse(0x8f95bb, 0x9, 0x91573d, 0x172, 0x0, 0x0, 0x0, 0x0, 0xc000091d40, 0x2, ...)
  /usr/local/go/src/text/template/parse/parse.go:55 +0x122
text/template.(*Template).Parse(0xc0000cf640, 0x91573d, 0x172, 0xc0000b2140, 0xc0000cf680, 0xc000093860)
  /usr/local/go/src/text/template/template.go:200 +0x111
html/template.(*Template).Parse(0xc000093a40, 0x91573d, 0x172, 0x0, 0x0, 0x0)
  /usr/local/go/src/html/template/template.go:189 +0x99
net/rpc.init()
  /usr/local/go/src/net/rpc/debug.go:39 +0x9b

[Bug] Fix the helm lint errors

Summary

# helm lint ./charts/
==> Linting ./charts/
[ERROR] Chart.yaml: version 'beta0.1.0' is not a valid SemVer
[INFO] Chart.yaml: icon is recommended
[ERROR] Chart.yaml: chart type is not valid in apiVersion 'v1'. It is valid in apiVersion 'v2'

Error: 1 chart(s) linted, 1 chart(s) failed
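Both errors point at Chart.yaml: the version must be valid SemVer (pre-release tags go after a dash), and the type field is only valid with apiVersion v2 (alternatively, drop type and keep apiVersion v1). A sketch of a corrected file — the name is shown for illustration and other fields are elided:

```yaml
# Chart.yaml — 'beta0.1.0' is not SemVer; '0.1.0-beta' is.
apiVersion: v2            # required for the 'type' field
type: application
name: mysql-operator      # illustrative
version: 0.1.0-beta
```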

Executing SQL gets no response

Description of the problem:
After connecting to the master node, SQL statements (such as CREATE TABLE) get no response. After force-quitting the MySQL client and reconnecting, the SQL turns out to have executed successfully on the master node, but the slave nodes are not synchronized.

How to deploy:
Upload the template to the KubeSphere console deployment.

Deployment parameters:
Default.

Other configurations:
The project name is krypton-deploy and the release name is kryptondb.

[feature] add unit test

Is your feature request related to a problem? Please describe.

Describe the solution you'd like

testify + a mock framework (monkey, gomock, or gostub)

Describe alternatives you've considered

Additional context

[feature] add publishNotReadyAddresses param in headless service

We use publishNotReadyAddresses so that pods can be reached even when they are not ready.
When set to true, it indicates that DNS implementations must publish the notReadyAddresses of subsets for the Endpoints associated with the Service.
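A minimal sketch of the headless service with the field set — the service name, selector, and port are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: test-krypton               # illustrative name
spec:
  clusterIP: None                  # headless service
  publishNotReadyAddresses: true   # publish DNS records even for not-ready pods
  selector:
    app: test-krypton
  ports:
    - name: xenon
      port: 8801
```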

Before:

/ $ xenoncli cluster status
+------------------------------------------------------------+-------------------------------+--------+---------+--------------------------+---------------------+----------------+------------------------------------------------------------+
|                             ID                             |             Raft              | Mysqld | Monitor |          Backup          |        Mysql        | IO/SQL_RUNNING |                          MyLeader                          |
+------------------------------------------------------------+-------------------------------+--------+---------+--------------------------+---------------------+----------------+------------------------------------------------------------+
| test-krypton-1.test-krypton.default.svc.cluster.local:8801 | [ViewID:3 EpochID:2]@LEADER   | UNKNOW | OFF     | state:[NONE]␤            | [ALIVE] [READWRITE] | [true/true]    | test-krypton-1.test-krypton.default.svc.cluster.local:8801 |
|                                                            |                               |        |         | LastError:               |                     |                |                                                            |
+------------------------------------------------------------+-------------------------------+--------+---------+--------------------------+---------------------+----------------+------------------------------------------------------------+
| test-krypton-0.test-krypton.default.svc.cluster.local:8801 | [ViewID:3 EpochID:2]@FOLLOWER | UNKNOW | OFF     | state:[NONE]␤            | [ALIVE] [READONLY]  | [true/true]    | test-krypton-1.test-krypton.default.svc.cluster.local:8801 |
|                                                            |                               |        |         | LastError:               |                     |                |                                                            |
+------------------------------------------------------------+-------------------------------+--------+---------+--------------------------+---------------------+----------------+------------------------------------------------------------+
| test-krypton-2.test-krypton.default.svc.cluster.local:8801 | [ViewID:3 EpochID:2]@FOLLOWER | UNKNOW | OFF     | state:[NONE]␤            | [ALIVE] [READONLY]  | [true/true]    | test-krypton-1.test-krypton.default.svc.cluster.local:8801 |
|                                                            |                               |        |         | LastError:               |                     |                |                                                            |
+------------------------------------------------------------+-------------------------------+--------+---------+--------------------------+---------------------+----------------+------------------------------------------------------------+

After:

/ $ xenoncli cluster status
+------------------------------------------+-------------------------------+--------+---------+--------------------------+---------------------+----------------+------------------------------------------+
|                    ID                    |             Raft              | Mysqld | Monitor |          Backup          |        Mysql        | IO/SQL_RUNNING |                 MyLeader                 |
+------------------------------------------+-------------------------------+--------+---------+--------------------------+---------------------+----------------+------------------------------------------+
| test-krypton-1.test-krypton.default:8801 | [ViewID:3 EpochID:2]@LEADER   | UNKNOW | OFF     | state:[NONE]␤            | [ALIVE] [READWRITE] | [true/true]    | test-krypton-1.test-krypton.default:8801 |
|                                          |                               |        |         | LastError:               |                     |                |                                          |
+------------------------------------------+-------------------------------+--------+---------+--------------------------+---------------------+----------------+------------------------------------------+
| test-krypton-0.test-krypton.default:8801 | [ViewID:3 EpochID:2]@FOLLOWER | UNKNOW | OFF     | state:[NONE]␤            | [ALIVE] [READONLY]  | [true/true]    | test-krypton-1.test-krypton.default:8801 |
|                                          |                               |        |         | LastError:               |                     |                |                                          |
+------------------------------------------+-------------------------------+--------+---------+--------------------------+---------------------+----------------+------------------------------------------+
| test-krypton-2.test-krypton.default:8801 | [ViewID:3 EpochID:2]@FOLLOWER | UNKNOW | OFF     | state:[NONE]␤            | [ALIVE] [READONLY]  | [true/true]    | test-krypton-1.test-krypton.default:8801 |
|                                          |                               |        |         | LastError:               |                     |                |                                          |
+------------------------------------------+-------------------------------+--------+---------+--------------------------+---------------------+----------------+------------------------------------------+

[enhancement] Modify the keywords

Summary

master->leader
slave->follower

tt-krypton-master           NodePort    10.96.42.194    <none>        3306:32370/TCP      102m
tt-krypton-metrics          ClusterIP   None            <none>        9104/TCP            102m
tt-krypton-slave            NodePort    10.96.21.243    <none>        3306:30545/TCP      102m

[feature] add workflow and Travis CI

Is your feature request related to a problem? Please describe.
Add workflow
Add Travis CI

Describe the solution you'd like

Describe alternatives you've considered

Additional context

[feature] Merge the MySQL operator code into the main branch

Is your feature request related to a problem? Please describe.

Update the cluster API, add a status API, add a sidecar container, and update the Dockerfile, Makefile, ...

Describe the solution you'd like

Describe alternatives you've considered

Additional context

How should initialization SQL files be executed with RadonDB?

General Question

I tried the approach used by the stock mysql image: I modified statefulset.yaml in the chart to mount a ConfigMap into the mysql container's /docker-entrypoint-initdb.d directory, but it did not take effect. I then compared the docker-entrypoint.sh of mysql with that of xenondb/percona:5.7.33 and found some differences in the relevant logic:

mysql:5.7:
https://github.com/docker-library/mysql/blob/master/5.7/docker-entrypoint.sh

# usage: docker_process_init_files [file [file [...]]]
#    ie: docker_process_init_files /always-initdb.d/*
# process initializer files, based on file extensions
docker_process_init_files() {
	# mysql here for backwards compatibility "${mysql[@]}"
	mysql=( docker_process_sql )

	echo
	local f
	for f; do
		case "$f" in
			*.sh)
				# https://github.com/docker-library/postgres/issues/450#issuecomment-393167936
				# https://github.com/docker-library/postgres/pull/452
				if [ -x "$f" ]; then
					mysql_note "$0: running $f"
					"$f"
				else
					mysql_note "$0: sourcing $f"
					. "$f"
				fi
				;;
			*.sql)    mysql_note "$0: running $f"; docker_process_sql < "$f"; echo ;;
			*.sql.gz) mysql_note "$0: running $f"; gunzip -c "$f" | docker_process_sql; echo ;;
			*.sql.xz) mysql_note "$0: running $f"; xzcat "$f" | docker_process_sql; echo ;;
			*)        mysql_warn "$0: ignoring $f" ;;
		esac
		echo
	done
}

xenondb/percona:5.7.33:
https://github.com/radondb/radondb-mysql-kubernetes/blob/main/charts/helm/dockerfiles/mysql/mysql-entry.sh

# usage: process_init_file FILENAME MYSQLCOMMAND...
#       ie: process_init_file foo.sh mysql -uroot
# (process a single initializer file, based on its extension. we define this
# function here, so that initializer scripts (*.sh) can use the same logic,
# potentially recursively, or override the logic used in subsequent calls)
process_init_file() {
        local f="$1"; shift
        local mysql=( "$@" )

        case "$f" in
                *.sh)    echo "$0: running $f"; . "$f" ;;
                *.sql)  echo "$0: running $f"; "${mysql[@]}" < "$f"; echo ;;
                *.sql.gz) echo "$0: running $f"; gunzip -c "$f" | "${mysql[@]}"; echo ;;
                *)              echo "$0: ignoring $f" ;;
        esac
        echo
}

I would like to know the purpose of these changes, and how radondb-mysql is supposed to execute initialization SQL files.

Pod role label cannot be switched correctly after kill leader pod

Steps to reproduce

The old leader is test-xenondb-0; the old followers are test-xenondb-1 and test-xenondb-2.

Step 1

Execute the following script against the leader node.

sysbench --db-driver=mysql --mysql-user=qingcloud --mysql-password=Qing@123 --mysql-host=<host> --mysql-port=<port> --mysql-db=qingcloud --range_size=100 --table_size=100000 --tables=4 --threads=128 --events=0 --time=3600 --rand-type=uniform /usr/share/sysbench/oltp_read_write.lua run

Step 2

Delete the leader pod.

kubectl delete pod <leader-pod-name>

The deleted pod then restarts automatically and a new leader is elected, but the role label of the new leader pod is not updated.

kind: Pod
apiVersion: v1
metadata:
  name: test-xenondb-2
  generateName: test-xenondb-
  namespace: xenondb-deploy
  labels:
    app: test-xenondb
    controller-revision-hash: test-xenondb-5c8949f646
    release: test
    role: follower
    statefulset.kubernetes.io/pod-name: test-xenondb-2

Log

The IO status of the new leader node is false.

/ $ xenoncli cluster status
+-------------------------------------------------+--------------------------------+--------+---------+--------------------------+--------------------+----------------+-------------------------------------------------+
|                       ID                        |              Raft              | Mysqld | Monitor |          Backup          |       Mysql        | IO/SQL_RUNNING |                    MyLeader                     |
+-------------------------------------------------+--------------------------------+--------+---------+--------------------------+--------------------+----------------+-------------------------------------------------+
| test-xenondb-2.test-xenondb.xenondb-deploy:8801 | [ViewID:32 EpochID:2]@LEADER   | UNKNOW | OFF     | state:[NONE]␤            | [ALIVE] [READONLY] | [false/true]   | test-xenondb-2.test-xenondb.xenondb-deploy:8801 |
|                                                 |                                |        |         | LastError:               |                    |                |                                                 |
+-------------------------------------------------+--------------------------------+--------+---------+--------------------------+--------------------+----------------+-------------------------------------------------+
| test-xenondb-0.test-xenondb.xenondb-deploy:8801 | [ViewID:32 EpochID:2]@FOLLOWER | UNKNOW | OFF     | state:[NONE]␤            | [ALIVE] [READONLY] | [true/true]    | test-xenondb-2.test-xenondb.xenondb-deploy:8801 |
|                                                 |                                |        |         | LastError:               |                    |                |                                                 |
+-------------------------------------------------+--------------------------------+--------+---------+--------------------------+--------------------+----------------+-------------------------------------------------+
| test-xenondb-1.test-xenondb.xenondb-deploy:8801 | [ViewID:32 EpochID:2]@FOLLOWER | UNKNOW | OFF     | state:[NONE]␤            | [ALIVE] [READONLY] | [true/true]    | test-xenondb-2.test-xenondb.xenondb-deploy:8801 |
|                                                 |                                |        |         | LastError:               |                    |                |                                                 |
+-------------------------------------------------+--------------------------------+--------+---------+--------------------------+--------------------+----------------+-------------------------------------------------+
/ $ xenoncli cluster gtid
+-------------------------------------------------+----------+-------+--------------------------------------------------+------------------------------------------------------+
|                       ID                        |   Raft   | Mysql |                Executed_GTID_Set                 |                  Retrieved_GTID_Set                  |
+-------------------------------------------------+----------+-------+--------------------------------------------------+------------------------------------------------------+
| test-xenondb-0.test-xenondb.xenondb-deploy:8801 | FOLLOWER | ALIVE | 17f2fb8e-f6af-4da1-b25a-09e1639d81e3:1-1079443,␤ |                                                      |
|                                                 |          |       | 644c2d77-6b63-4843-a1ca-6d8bb45e448c:1-165432,␤  |                                                      |
|                                                 |          |       | bc25fa13-b14a-4946-ac41-e4b2cec149e1:1-133170,␤  |                                                      |
|                                                 |          |       | f3177eaf-2b0b-42cf-b6c0-7207a6e66ad4:1-159277    |                                                      |
+-------------------------------------------------+----------+-------+--------------------------------------------------+------------------------------------------------------+
| test-xenondb-1.test-xenondb.xenondb-deploy:8801 | FOLLOWER | ALIVE | 17f2fb8e-f6af-4da1-b25a-09e1639d81e3:1-341721,␤  | 17f2fb8e-f6af-4da1-b25a-09e1639d81e3:255336-348183   |
|                                                 |          |       | 644c2d77-6b63-4843-a1ca-6d8bb45e448c:1-165432,␤  |                                                      |
|                                                 |          |       | bc25fa13-b14a-4946-ac41-e4b2cec149e1:1-133170,␤  |                                                      |
|                                                 |          |       | f3177eaf-2b0b-42cf-b6c0-7207a6e66ad4:1-159405    |                                                      |
+-------------------------------------------------+----------+-------+--------------------------------------------------+------------------------------------------------------+
| test-xenondb-2.test-xenondb.xenondb-deploy:8801 | LEADER   | ALIVE | 17f2fb8e-f6af-4da1-b25a-09e1639d81e3:1-348183,␤  | 17f2fb8e-f6af-4da1-b25a-09e1639d81e3:1-1079443,␤     |
|                                                 |          |       | 644c2d77-6b63-4843-a1ca-6d8bb45e448c:1-165432,␤  | 644c2d77-6b63-4843-a1ca-6d8bb45e448c:141788-165432,␤ |
|                                                 |          |       | bc25fa13-b14a-4946-ac41-e4b2cec149e1:1-133170,␤  | bc25fa13-b14a-4946-ac41-e4b2cec149e1:84271-133170,␤  |
|                                                 |          |       | f3177eaf-2b0b-42cf-b6c0-7207a6e66ad4:1-159277    | f3177eaf-2b0b-42cf-b6c0-7207a6e66ad4:1-159277        |
+-------------------------------------------------+----------+-------+--------------------------------------------------+------------------------------------------------------+
mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State:
                  Master_Host: test-xenondb-0.test-xenondb.xenondb-deploy
                  Master_User: qc_repl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000006
          Read_Master_Log_Pos: 173547413
               Relay_Log_File: mysql-relay-bin.000004
                Relay_Log_Pos: 838635253
        Relay_Master_Log_File: mysql-bin.000004
             Slave_IO_Running: No
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 978983899
              Relay_Log_Space: 2180829427
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 1148
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 2003
                Last_IO_Error: error reconnecting to master 'qc_repl@test-xenondb-0.test-xenondb.xenondb-deploy:3306' - retry-time: 60  retries: 1
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 100
                  Master_UUID: 17f2fb8e-f6af-4da1-b25a-09e1639d81e3
             Master_Info_File: mysql.slave_master_info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: System lock
           Master_Retry_Count: 86400
                  Master_Bind:
      Last_IO_Error_Timestamp: 210425 10:05:06
     Last_SQL_Error_Timestamp:
               Master_SSL_Crl:
           Master_SSL_Crlpath:
           Retrieved_Gtid_Set: 17f2fb8e-f6af-4da1-b25a-09e1639d81e3:1-1079443,
644c2d77-6b63-4843-a1ca-6d8bb45e448c:141788-165432,
bc25fa13-b14a-4946-ac41-e4b2cec149e1:84271-133170,
f3177eaf-2b0b-42cf-b6c0-7207a6e66ad4:1-159277
            Executed_Gtid_Set: 17f2fb8e-f6af-4da1-b25a-09e1639d81e3:1-288523,
644c2d77-6b63-4843-a1ca-6d8bb45e448c:1-165432,
bc25fa13-b14a-4946-ac41-e4b2cec149e1:1-133170,
f3177eaf-2b0b-42cf-b6c0-7207a6e66ad4:1-159277
                Auto_Position: 1
         Replicate_Rewrite_DB:
                 Channel_Name:
           Master_TLS_Version:

The mysql log of test-xenondb-2.

2021-04-25T10:05:06.019869+08:00 157 [Note] Slave for channel '': received end packet from server due to dump thread being killed on master. Dump threads are killed for example during master shutdown, explicitly by a user, or when the master receives a binlog send request from a duplicate server UUID : Error

2021-04-25T10:05:06.020011+08:00 157 [Note] Slave I/O thread: Failed reading log event, reconnecting to retry, log 'mysql-bin.000006' at position 173547413 for channel ''

2021-04-25T10:05:06.020051+08:00 157 [Warning] Storing MySQL user name or password information in the master info repository is not secure and is therefore not recommended. Please consider using the USER and PASSWORD connection options for START SLAVE; see the 'START SLAVE Syntax' in the MySQL Manual for more information.

2021-04-25T10:05:06.021838+08:00 157 [ERROR] Slave I/O for channel '': error reconnecting to master 'qc_repl@test-xenondb-0.test-xenondb.xenondb-deploy:3306' - retry-time: 60 retries: 1, Error_code: 2003

2021-04-25T10:05:06.974288+08:00 157 [Note] Slave I/O thread killed during or after a reconnect done to recover from failed read

2021-04-25T10:05:06.974326+08:00 157 [Note] Slave I/O thread exiting for channel '', read up to log 'mysql-bin.000006', position 173547413

2021-04-25T10:05:12.311606+08:00 58170 [Note] Start binlog_dump to master_thread_id(58170) slave_server(101), pos(, 4)

2021-04-25T10:05:12.311651+08:00 58170 [Note] Start semi-sync binlog_dump to slave (server_id: 101), pos(, 4)

2021-04-25T10:05:16.989469+08:00 32 [Note] Semi-sync replication initialized for transactions.

2021-04-25T10:05:16.989546+08:00 32 [Note] Semi-sync replication enabled on the master.

2021-04-25T10:05:16.989841+08:00 0 [Note] Starting ack receiver thread

2021-04-25T10:06:10.012709+08:00 58194 [Note] Start binlog_dump to master_thread_id(58194) slave_server(100), pos(, 4)

2021-04-25T10:06:10.012747+08:00 58194 [Note] Start semi-sync binlog_dump to slave (server_id: 100), pos(, 4)

[feature] Back up to an NFS server, restore from an NFS server

Is your feature request related to a problem? Please describe.

Describe the solution you'd like
Make a backup like this: add a BackupToNFS field specifying the NFS server in the YAML file, then

kubectl apply -f mysql_v1alpha1_backup.yaml 

backs up to the NFS server.
Add a RestoreFromNFS field specifying the NFS server in the YAML file, then the cluster can be restored with:

kubectl apply -f mysql_v1alpha1_cluster.yaml 
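A hedged sketch of the proposed fields. The BackupToNFS/RestoreFromNFS names come from the request itself; the API group, kinds, and surrounding structure are assumptions:

```yaml
# Illustrative sketch only — surrounding structure is an assumption.
apiVersion: mysql.radondb.com/v1alpha1
kind: Backup
metadata:
  name: backup-nfs-demo
spec:
  clusterName: sample
  backupToNFS: 10.0.0.10:/backup      # NFS server and export path (example values)
---
apiVersion: mysql.radondb.com/v1alpha1
kind: MysqlCluster
metadata:
  name: sample-restored
spec:
  restoreFromNFS: 10.0.0.10:/backup
```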

Describe alternatives you've considered

Additional context

[feature] Support volume resize

Is your feature request related to a problem? Please describe.

Basic: support manually expanding or reducing node disk capacity.

Advanced: support automatic capacity expansion.
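For the manual expansion case, Kubernetes already supports growing a PVC in place when its StorageClass has allowVolumeExpansion: true (shrinking is not supported by PVC expansion). A sketch with an illustrative PVC name:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-sample-mysql-0   # illustrative PVC name
spec:
  resources:
    requests:
      storage: 20Gi           # raise from the previous size to expand the volume
```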

Describe the solution you'd like

Describe alternatives you've considered

Additional context

[feature] add operator sidecar

Is your feature request related to a problem? Please describe.

Describe the solution you'd like

Run some operator tasks in an init container.

Describe alternatives you've considered

Additional context

Failed: helm install

kubectl get pods
NAME              READY   STATUS             RESTARTS   AGE
mysql-krypton-0   2/3     CrashLoopBackOff   6          13m
Initializing database
2021-03-16T17:33:55.774963+08:00 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2021-03-16T17:33:55.776426+08:00 0 [ERROR] --initialize specified but the data directory has files in it. Aborting.
2021-03-16T17:33:55.776454+08:00 0 [ERROR] Aborting

[k8s] Hang when running: kubectl delete pv

Summary

root@i-2xctojb9:~/krypton-helm# kubectl get pv |grep krypton
pvc-61bf00bd-0395-4c0d-ab2a-51dd612c496e   10Gi       RWO            Delete           Terminating   default/data-test-krypton-2                                        csi-standard            7d2h

root@i-2xctojb9:~/krypton-helm# kubectl delete pv pvc-61bf00bd-0395-4c0d-ab2a-51dd612c496e --grace-period=0 --force

After 1 min, 5 min, ... the command is still hanging.

sysbench FATAL: mysql_stmt_prepare() failed

1. Prepare:
sysbench /usr/share/sysbench/oltp_read_write.lua --mysql-port=30160 --mysql-db=test_db --mysql-user=usr --mysql-password=123456 --table_size=400000 --tables=8 --threads=256 --events=0 --report-interval=10 --time=600 --mysql-host=192.168.0.3 --table_size=400000 prepare
2. Run:
sysbench /usr/share/sysbench/oltp_read_write.lua --mysql-port=30160 --mysql-db=test_db --mysql-user=usr --mysql-password=123456 --table_size=400000 --tables=8 --threads=256 --events=0 --report-interval=10 --time=600 --mysql-host=192.168.0.3 --table_size=400000 run
With 256 threads, more prepared statements are created than max_prepared_stmt_count allows (current value: 16382), and sysbench fails:

FATAL: mysql_stmt_prepare() failed
FATAL: MySQL error: 1461 "Can't create more than max_prepared_stmt_count statements (current value: 16382)"
FATAL: `thread_init' function failed: /usr/share/sysbench/oltp_common.lua:275: SQL API error
FATAL: mysql_stmt_prepare() failed

sysbench /usr/share/sysbench/oltp_read_write.lua --mysql-port=30160 --mysql-db=test_db --mysql-user=usr --mysql-password=123456 --table_size=400000 --tables=8 --threads=256 --events=0 --report-interval=10 --time=600 --mysql-host=192.168.0.3 --table_size=400000 cleanup
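The numbers line up with the limit. A rough estimate — the per-table figure of eight prepared statements is an assumption for illustration, not sysbench's exact count — already lands just above the default max_prepared_stmt_count of 16382:

```python
# Rough estimate of how many prepared statements this sysbench run creates.
# The 8-statements-per-table figure is an assumption for illustration.
threads = 256          # --threads=256
tables = 8             # --tables=8
stmts_per_table = 8    # oltp_read_write prepares several statements per table
needed = threads * tables * stmts_per_table
print(needed)          # 16384, just over the default limit of 16382
```

Raising the server variable (SET GLOBAL max_prepared_stmt_count, up to 1048576) or lowering --threads should avoid the error.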

