morphy2k / k8s-mongo-sidecar
Kubernetes sidecar for MongoDB
License: MIT License
Dependabot couldn't find a .yml for this project.
Dependabot requires a .yml to evaluate your project's current GitHub Actions dependencies. It expected to find one at the path: /.github/workflows/<anything>.yml.
If this isn't a GitHub Actions project, or if it is a library, you may wish to disable updates for it from within Dependabot.
You can mention @dependabot in the comments below to contact the Dependabot team.
Quorum check failed because not enough voting nodes responded; required 2 but only the following 1 voting nodes responded: mongo-0.mongo.mongo.svc.cluster.local:27017; the following nodes did not respond affirmatively: mongo-2.mongo.mongo.svc.cluster.local:27017 failed with command replSetHeartbeat requires authentication, mongo-1.mongo.mongo.svc.cluster.local:27017 failed with command replSetHeartbeat requires authentication
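The `replSetHeartbeat requires authentication` failures usually mean mongod is running with access control enabled but the members share no keyFile for internal authentication, so heartbeats between replicas are rejected. A minimal sketch of the standard MongoDB keyFile setup, assuming a Secret named `mongo-keyfile` exists; the Secret name and mount path are illustrative, not from this repo's docs:

```yaml
# Sketch: shared keyFile for replica-set internal auth.
# "mongo-keyfile" Secret and mount path are assumptions.
containers:
  - name: mongo
    image: mongo:4
    command:
      - mongod
    args:
      - "--replSet=rs0"
      - "--bind_ip=0.0.0.0"
      - "--auth"
      - "--keyFile=/etc/mongo-keyfile/keyfile"
    volumeMounts:
      - name: keyfile
        mountPath: /etc/mongo-keyfile
        readOnly: true
volumes:
  - name: keyfile
    secret:
      secretName: mongo-keyfile
      defaultMode: 0400
```

With a keyFile in place, members authenticate to each other even before any users are created, which is what `replSetHeartbeat` needs.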
During pod creation, the following error occurred:
[email protected] start /opt/k8s-mongo-sidecar
node src/index.js
Using mongo port: 27017
Starting up k8s-mongo-sidecar
The cluster domain 'cluster.local' was successfully verified.
Error trying to initialize k8s-mongo-sidecar Error: Failed to get /openapi/v2 and /swagger.json: Created Component, but require templated one. This is a bug. Please report: https://github.com/silasbw/fluent-openapi/issues
at /opt/k8s-mongo-sidecar/node_modules/kubernetes-client/lib/swagger-client.js:58:15
at processTicksAndRejections (internal/process/task_queues.js:93:5)
at async Object.init (/opt/k8s-mongo-sidecar/src/lib/k8s.js:17:12)
at async Object.init (/opt/k8s-mongo-sidecar/src/lib/worker.js:33:9)
Related issues:
- swagger-fluent issue #55
- kubernetes-external-secrets issue #543
Hello!
I use minikube and your example configuration for mongo StatefulSets.
The first pod, mongo-0, works fine and becomes rs0:PRIMARY. But I can see no errors in the sidecar logs:
Starting up k8s-mongo-sidecar
The cluster domain 'cluster.local' was successfully verified.
(node:19) DeprecationWarning: current Server Discovery and Monitoring engine is deprecated, and will be removed in a future version. To use the new Server Discover and Monitoring engine, pass option { useUnifiedTopology: true } to the MongoClient constructor.
After that there are no more log lines or any actions from the sidecar.
I can see constant connections in the mongodb logs:
2020-06-02T21:07:15.048+0000 I NETWORK [listener] connection accepted from 172.18.0.16:60150 #65 (2 connections now open)
2020-06-02T21:07:15.048+0000 I NETWORK [conn65] received client metadata from 172.18.0.16:60150 conn65: { driver: { name: "nodejs", version: "3.5.6" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-33-generic" }, platform: "'Node.js v12.16.2, LE (legacy)" }
2020-06-02T21:07:15.059+0000 I NETWORK [conn65] end connection 172.18.0.16:60150 (1 connection now open)
2020-06-02T21:07:15.059+0000 I NETWORK [conn64] end connection 127.0.0.1:58206 (0 connections now open)
So the cluster does not form.
Do you know where the issue could be?
Kubernetes config:
- name: mongo-sidecar
  image: morphy/k8s-mongo-sidecar
  env:
    - name: KUBERNETES_POD_LABELS
      value: "role=mongo,environment=test"
    - name: KUBERNETES_SERVICE_NAME
      value: "mongo"
And in the logs I can see this:
Error in workloop: Error: pods is forbidden: User "system:serviceaccount:default:default" cannot list resource "pods" in API group "" at the cluster scope
No matter what I put into the env variables, the sidecar tries to use the default service account instead of mongo.
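The sidecar always uses whichever service account its pod runs under; the account is selected with `serviceAccountName` in the pod spec, not through an env variable. A minimal RBAC sketch that would allow listing pods cluster-wide, assuming a `mongo` service account in the `default` namespace (all resource names here are illustrative):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: mongo
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: mongo-pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: mongo-pod-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: mongo-pod-reader
subjects:
  - kind: ServiceAccount
    name: mongo
    namespace: default
```

Then add `serviceAccountName: mongo` under the StatefulSet's pod `spec:` so the sidecar stops falling back to `default:default`.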
Same as @PuKoren with this yaml file.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo-statefulset
spec:
  serviceName: mongo
  replicas: 3
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: mongo:4
          command:
            - mongod
          args:
            - '--replSet=rs0'
            - '--bind_ip=0.0.0.0'
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
        - name: mongo-sidecar
          image: morphy/k8s-mongo-sidecar
          env:
            - name: KUBERNETES_POD_LABELS
              value: 'role=mongo, environment=test'
            - name: KUBERNETES_SERVICE_NAME
              value: 'mongo'
  selector:
    matchLabels:
      role: mongo
  volumeClaimTemplates:
    - metadata:
        name: mongo-persistent-storage
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
        storageClassName: do-block-storage
Originally posted by @wlopez-cl in #1 (comment)
I get a "no replset config has been received" message when I try to deploy the cluster on my K8s. I don't know why it happens. Here is my YAML:
apiVersion: v1
kind: Service
metadata:
  name: mongo
  namespace: production
  labels:
    name: mongo
spec:
  ports:
    - port: 27017
      targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
  namespace: production
spec:
  serviceName: "mongo"
  replicas: 3
  template:
    metadata:
      labels:
        role: mongo
        environment: production
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: mongo
          command:
            - mongod
            - "--replSet=rs0"
            - "--smallfiles"
            - "--noprealloc"
            - "--bind_ip=0.0.0.0"
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
          resources:
            requests:
              memory: "1G"
              cpu: "0.5"
            limits:
              memory: "2G"
              cpu: "2"
        - name: mongo-sidecar
          image: morphy/k8s-mongo-sidecar
          env:
            - name: MONGO_SIDECAR_POD_LABELS
              value: "role=mongo,environment=production"
  volumeClaimTemplates:
    - metadata:
        name: mongo-persistent-storage
        annotations:
          volume.beta.kubernetes.io/storage-class: "ssd"
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 25Gi
Is there anything I obviously did wrong?
Thanks for your help.
rs.status()
{
    "operationTime" : Timestamp(0, 0),
    "ok" : 0,
    "errmsg" : "no replset config has been received",
    "code" : 94,
    "codeName" : "NotYetInitialized"
}
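`NotYetInitialized` means no member has ever run `rs.initiate()`; the sidecar is supposed to do this automatically. Two things worth checking: the manifest above configures the sidecar with `MONGO_SIDECAR_POD_LABELS`, while the other examples in this thread configure the `morphy/k8s-mongo-sidecar` image via `KUBERNETES_POD_LABELS` and `KUBERNETES_SERVICE_NAME`. And you can verify that initiation works at all by doing it by hand (pod and host names below follow the manifest above; the shell binary may be `mongo` or `mongosh` depending on the image version):

```shell
# Initiate the replica set manually from the first pod.
kubectl exec -n production mongo-0 -c mongo -- mongo --eval '
  rs.initiate({
    _id: "rs0",
    members: [{ _id: 0, host: "mongo-0.mongo.production.svc.cluster.local:27017" }]
  })'

# Check the result; "ok" should now be 1.
kubectl exec -n production mongo-0 -c mongo -- mongo --eval 'rs.status()'
```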
We need to enable IPv6-only networking for our cluster, and we got an error when creating a replica set.
containers:
  - name: mongo
    image: mongo:4
    imagePullPolicy: IfNotPresent
    command:
      - mongod
    args:
      - "--replSet=rs0"
      - "--bind_ip=::"
      ...
      - "--ipv6"
It creates a primary on the mongo-0 replica, but the other sidecar containers crash:
> [email protected] start /opt/k8s-mongo-sidecar
> node src/index.js
Using mongo port: 27017
Starting up k8s-mongo-sidecar
The cluster domain 'cluster.local' was successfully verified.
Error trying to initialize k8s-mongo-sidecar Error: Failed to get /openapi/v2 and /swagger.json: getaddrinfo ENOTFOUND fd08
at /opt/k8s-mongo-sidecar/node_modules/kubernetes-client/lib/swagger-client.js:58:15
at processTicksAndRejections (internal/process/task_queues.js:93:5)
at async Object.init (/opt/k8s-mongo-sidecar/src/lib/k8s.js:17:12)
at async Object.init (/opt/k8s-mongo-sidecar/src/lib/worker.js:33:9)
Are there additional parameters for working with IPv6?
Just got this set up (thank you for your effort!) and my replicas won't join under any circumstances. When I start with replicas=3, I get the error from the title spewed every 5 seconds. Here's why, according to the MongoDB docs:
Reconfiguration Can Add or Remove No More than One Voting Member at a Time
Starting in MongoDB 4.4, rs.reconfig() by default allows adding or removing no more than 1 voting member at a time. For example, a new configuration can make at most one of the following changes to the cluster membership:
- Adding a new voting replica set member.
- Removing an existing voting replica set member.
- Modifying the votes for an existing replica set member.
To add or remove multiple voting members, issue a series of rs.reconfig() operations to add or remove one member at a time. Issuing a force reconfiguration immediately installs the new configuration even if it adds or removes multiple voting members. Force reconfiguration can cause unexpected behavior, such as the rollback of "majority" committed write operations.
Looks like the replica script now has to make only one change per update.
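To illustrate the rule, under MongoDB 4.4+ voting members have to be added one reconfiguration at a time, e.g. one `rs.add()` call per member rather than a single reconfig that adds both (the host names below are placeholders for this example):

```shell
# Each rs.add() is its own reconfiguration, run against the current primary.
mongosh --eval 'rs.add("mongo-1.mongo.default.svc.cluster.local:27017")'
mongosh --eval 'rs.add("mongo-2.mongo.default.svc.cluster.local:27017")'
```

A sidecar that builds one config object with all discovered pods and applies it in a single `rs.reconfig()` will trip this check as soon as two members appear at once.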
Support MongoDB 4.4+.
When I mount an NFS persistent volume at /data/db, the replica set is not created. Here is the error. Any idea how to run a StatefulSet with a persistent volume of NFS type? Thanks!
$ kubectl logs -f mongo-1 -c mongo
2019-08-14T21:08:14.940+0000 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
2019-08-14T21:08:14.948+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=mongo-1
2019-08-14T21:08:14.948+0000 I CONTROL [initandlisten] db version v4.2.0
2019-08-14T21:08:14.948+0000 I CONTROL [initandlisten] git version: a4b751dcf51dd249c5865812b390cfd1c0129c30
2019-08-14T21:08:14.948+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.1.1 11 Sep 2018
2019-08-14T21:08:14.948+0000 I CONTROL [initandlisten] allocator: tcmalloc
2019-08-14T21:08:14.948+0000 I CONTROL [initandlisten] modules: none
2019-08-14T21:08:14.948+0000 I CONTROL [initandlisten] build environment:
2019-08-14T21:08:14.948+0000 I CONTROL [initandlisten] distmod: ubuntu1804
2019-08-14T21:08:14.948+0000 I CONTROL [initandlisten] distarch: x86_64
2019-08-14T21:08:14.948+0000 I CONTROL [initandlisten] target_arch: x86_64
2019-08-14T21:08:14.948+0000 I CONTROL [initandlisten] options: { replication: { replSet: "rs0" } }
2019-08-14T21:08:14.957+0000 I STORAGE [initandlisten] exception in initAndListen: DBPathInUse: Unable to lock the lock file: /data/db/mongod.lock (Resource temporarily unavailable). Another mongod instance is already running on the /data/db directory, terminating
2019-08-14T21:08:14.957+0000 I NETWORK [initandlisten] shutdown: going to close listening sockets...
2019-08-14T21:08:14.957+0000 I NETWORK [initandlisten] removing socket file: /tmp/mongodb-27017.sock
2019-08-14T21:08:14.958+0000 I - [initandlisten] Stopping further Flow Control ticket acquisitions.
2019-08-14T21:08:14.958+0000 I CONTROL [initandlisten] now exiting
2019-08-14T21:08:14.958+0000 I CONTROL [initandlisten] shutting down with code:100
---
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: phenex-nfs-mongo
  namespace: phenex
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  nfs:
    server: master
    path: /nfs/data/phenex/production/permastore/mongo
  claimRef:
    name: phenex-nfs-mongo
    namespace: phenex
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: phenex-nfs-mongo
  namespace: phenex
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
  namespace: phenex
spec:
  serviceName: mongo
  replicas: 3
  selector:
    matchLabels:
      run: mongo
      tier: backend
      deployment: production
  template:
    metadata:
      labels:
        run: mongo
        tier: backend
        deployment: production
    spec:
      serviceAccountName: mongo
      automountServiceAccountToken: true
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: mongo:4.2.0-bionic
          command:
            - mongod
          args:
            - "--replSet=rs0"
            - "--bind_ip=0.0.0.0"
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: phenex-nfs-mongo
              mountPath: /data/db
        - name: mongo-sidecar
          image: morphy/k8s-mongo-sidecar
          env:
            - name: KUBERNETES_POD_LABELS
              value: "run=mongo"
            - name: KUBERNETES_SERVICE_NAME
              value: mongo
      volumes:
        - name: phenex-nfs-mongo
          persistentVolumeClaim:
            claimName: phenex-nfs-mongo
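The `DBPathInUse` error is consistent with all three replicas mounting the same `phenex-nfs-mongo` claim: the StatefulSet uses a single static PVC in `volumes:`, so every pod locks the same /data/db directory and all but the first mongod terminates. With a StatefulSet, each pod needs its own volume, typically via `volumeClaimTemplates`. A sketch, assuming an NFS dynamic provisioner with a `nfs-client` storage class is available (that class name is an assumption, not from this manifest):

```yaml
# One PVC per pod instead of one shared NFS claim for all replicas.
volumeClaimTemplates:
  - metadata:
      name: phenex-nfs-mongo
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      storageClassName: nfs-client   # assumption: an NFS provisioner exists
```

With this in place, the `volumes:` entry and the static PV/PVC pair above are no longer needed; each pod gets `phenex-nfs-mongo-mongo-0`, `-mongo-1`, etc.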
Testing out k8s 1.19.2 and getting the error below; this worked on a previous version of Kubernetes.
Thanks in advance for any help.
$ logs mongo-0 mongo-sidecar
> [email protected] start /opt/k8s-mongo-sidecar
> node src/index.js
Using mongo port: 27017
Starting up k8s-mongo-sidecar
The cluster domain 'cluster.local' was successfully verified.
Error in workloop: Error: namespaces "pods" is forbidden: User "system:serviceaccount:default:mongo" cannot get resource "namespaces" in API group "" in the namespace "pods"
at /opt/k8s-mongo-sidecar/node_modules/kubernetes-client/backends/request/client.js:231:25
at Request._callback (/opt/k8s-mongo-sidecar/node_modules/kubernetes-client/backends/request/client.js:168:14)
at Request.self.callback (/opt/k8s-mongo-sidecar/node_modules/request/request.js:185:22)
at Request.emit (events.js:310:20)
at Request.<anonymous> (/opt/k8s-mongo-sidecar/node_modules/request/request.js:1154:10)
at Request.emit (events.js:310:20)
at IncomingMessage.<anonymous> (/opt/k8s-mongo-sidecar/node_modules/request/request.js:1076:12)
at Object.onceWrapper (events.js:416:28)
at IncomingMessage.emit (events.js:322:22)
at endReadableNT (_stream_readable.js:1187:12) {
code: 403,
statusCode: 403
}
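The message itself is odd: the client is asking for a namespace literally named "pods", which points at a misconfigured namespace value in the client rather than plain missing RBAC. Either way, the account's actual permissions can be checked directly from outside the pod:

```shell
# Verify what the mongo service account is allowed to do.
kubectl auth can-i list pods --as=system:serviceaccount:default:mongo
kubectl auth can-i get namespaces --as=system:serviceaccount:default:mongo
```

If `list pods` prints `yes` but the sidecar still fails with this 403, the problem is the request it builds (namespace/resource swapped), not the role binding.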
When I start with 1 replica it becomes primary and everything works fine. I then add a few more members and the sidecar picks them up and adds them to the list, but when I scale back down the members aren't removed from the list, and all members become secondary because there is no quorum to hold elections.
I was following the example to configure a StatefulSet deployment. With my actual config the pod is running 2/2, but when I connect to the first replica, which I expect to be primary, in my case it's not.
I'm using minikube.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: auth-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
# RBAC for Auth
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: default-view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
---
# Stateful Config
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: auth-mongo
spec:
  selector:
    matchLabels:
      role: auth-mongo
      app: auth-mongo
  serviceName: "auth-mongo-srv"
  replicas: 2
  template:
    metadata:
      labels:
        app: auth-mongo
        role: auth-mongo
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: auth-mongo
          image: mongo
          command:
            - mongod
            - "--bind_ip_all"
            - "--replSet"
            - rs0
          ports:
            - containerPort: 27017
              name: auth-db
          volumeMounts:
            - name: mongo-storage
              mountPath: /data/auth-db
        - name: auth-mongo-sidecar
          image: morphy/k8s-mongo-sidecar
          env:
            - name: KUBERNETES_POD_LABELS
              value: "app=auth-mongo,environment=test"
            - name: KUBERNETES_SERVICE_NAME
              value: "auth-mongo-srv"
  volumeClaimTemplates:
    - metadata:
        name: mongo-storage
        annotations:
          volume.beta.kubernetes.io/storage-class: "auth-storage"
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
---
# Auth Service
apiVersion: v1
kind: Service
metadata:
  name: auth-mongo-srv
spec:
  selector:
    app: auth-mongo
  ports:
    - name: auth-db
      protocol: TCP
      port: 27017
      targetPort: 27017
This is the output of kubectl exec -ti auth-mongo-0 -- mongosh
Defaulted container "auth-mongo" out of: auth-mongo, auth-mongo-sidecar
Current Mongosh Log ID: 6395d4f683739afc691b67a4
Connecting to: mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.6.1
Using MongoDB: 6.0.3
Using Mongosh: 1.6.1
For mongosh info see: https://docs.mongodb.com/mongodb-shell/
To help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy).
You can opt-out by running the disableTelemetry() command.
------
The server generated these startup warnings when booting
2022-12-11T13:02:33.413+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem
2022-12-11T13:02:33.551+00:00: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted
2022-12-11T13:02:33.551+00:00: You are running this process as the root user, which is not recommended
2022-12-11T13:02:33.551+00:00: vm.max_map_count is too low
------
------
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).
The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.
To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
------
test>
All the pods are running: kubectl get pods
NAME READY STATUS RESTARTS AGE
auth-mongo-0 2/2 Running 0 79s
auth-mongo-1 2/2 Running 0 72s
And my services: kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
auth-mongo-srv ClusterIP 10.99.75.54 <none> 27017/TCP 2m45s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 17d
Thanks for your help!
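With both containers Running and the service resolving, the next places to look are the sidecar's own log and the replica set state inside the first member (container names follow the manifest above):

```shell
# Sidecar log: after "Starting up k8s-mongo-sidecar" it should show
# the pods it discovered and any replica-set changes it attempted.
kubectl logs auth-mongo-0 -c auth-mongo-sidecar

# Replica set state as seen by the first member; an uninitiated set
# reports "NotYetInitialized" here.
kubectl exec -ti auth-mongo-0 -c auth-mongo -- mongosh --eval 'rs.status()'
```

The mongosh banner above also shows `directConnection=true`, which is expected for a local shell; it does not by itself indicate a replica-set problem.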