Comments (8)
Same error for heketi-cli volume create --size=4.
from gluster-kubernetes.
This is the corresponding output of kubectl logs -f deploy-heketi-543317244-8a2po:
[negroni] Started GET /clusters
[negroni] Completed 200 OK in 279.481µs
[negroni] Started GET /clusters/645be219ee6b0598b4d51458f2c82a12
[negroni] Completed 200 OK in 768.279µs
[negroni] Started POST /volumes
[negroni] Completed 202 Accepted in 511.664µs
[asynchttp] INFO 2016/11/16 14:33:26 Started job ad6af2614f4464ded4cdb21a5e5e13d0
[heketi] INFO 2016/11/16 14:33:26 Creating volume e29cbf81c2032ccadb78d2f9b7644798
[heketi] INFO 2016/11/16 14:33:26 brick_num: 0
[heketi] INFO 2016/11/16 14:33:27 Creating brick c3acc2d22c54dba073842beaf55da352
[heketi] INFO 2016/11/16 14:33:27 Creating brick a7cd48e117b3bdb30396f6714ed79058
[heketi] INFO 2016/11/16 14:33:27 Creating brick 150e1e80cac3a99bf245d519095ceb14
[negroni] Started GET /queue/ad6af2614f4464ded4cdb21a5e5e13d0
[negroni] Completed 200 OK in 129.866µs
[kubeexec] DEBUG 2016/11/16 14:33:27 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:331: Host: glusterfs2-1562006718-4d2io Command: mkdir -p /var/lib/heketi/mounts/vg_7b8fbfe3ad7de9c825f082f91d0bf6ac/brick_a7cd48e117b3bdb30396f6714ed79058
Result:
[kubeexec] DEBUG 2016/11/16 14:33:27 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:331: Host: glusterfs0-2272744551-a4ghp Command: mkdir -p /var/lib/heketi/mounts/vg_a19f21522ad62a555ce29fcfa374019c/brick_150e1e80cac3a99bf245d519095ceb14
Result:
[kubeexec] DEBUG 2016/11/16 14:33:27 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:331: Host: glusterfs1-1373000839-qq9jv Command: mkdir -p /var/lib/heketi/mounts/vg_71227ba841eb6ca845fb4315fe011b2c/brick_c3acc2d22c54dba073842beaf55da352
Result:
[negroni] Started GET /queue/ad6af2614f4464ded4cdb21a5e5e13d0
[negroni] Completed 200 OK in 117.194µs
[kubeexec] DEBUG 2016/11/16 14:33:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:331: Host: glusterfs1-1373000839-qq9jv Command: lvcreate --poolmetadatasize 167936K -c 256K -L 33554432K -T vg_71227ba841eb6ca845fb4315fe011b2c/tp_c3acc2d22c54dba073842beaf55da352 -V 33554432K -n brick_c3acc2d22c54dba073842beaf55da352
Result: Logical volume "brick_c3acc2d22c54dba073842beaf55da352" created.
[kubeexec] DEBUG 2016/11/16 14:33:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:331: Host: glusterfs2-1562006718-4d2io Command: lvcreate --poolmetadatasize 167936K -c 256K -L 33554432K -T vg_7b8fbfe3ad7de9c825f082f91d0bf6ac/tp_a7cd48e117b3bdb30396f6714ed79058 -V 33554432K -n brick_a7cd48e117b3bdb30396f6714ed79058
Result: Logical volume "brick_a7cd48e117b3bdb30396f6714ed79058" created.
[kubeexec] DEBUG 2016/11/16 14:33:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:331: Host: glusterfs0-2272744551-a4ghp Command: lvcreate --poolmetadatasize 167936K -c 256K -L 33554432K -T vg_a19f21522ad62a555ce29fcfa374019c/tp_150e1e80cac3a99bf245d519095ceb14 -V 33554432K -n brick_150e1e80cac3a99bf245d519095ceb14
Result: Logical volume "brick_150e1e80cac3a99bf245d519095ceb14" created.
[negroni] Started GET /queue/ad6af2614f4464ded4cdb21a5e5e13d0
[negroni] Completed 200 OK in 160.282µs
[kubeexec] DEBUG 2016/11/16 14:33:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:331: Host: glusterfs1-1373000839-qq9jv Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_71227ba841eb6ca845fb4315fe011b2c-brick_c3acc2d22c54dba073842beaf55da352
Result: meta-data=/dev/mapper/vg_71227ba841eb6ca845fb4315fe011b2c-brick_c3acc2d22c54dba073842beaf55da352 isize=512 agcount=16, agsize=524224 blks
= sectsz=512 attr=2, projid32bit=1
= crc=0 finobt=0
data = bsize=4096 blocks=8387584, imaxpct=25
= sunit=64 swidth=64 blks
naming =version 2 bsize=8192 ascii-ci=0 ftype=0
log =internal log bsize=4096 blocks=4096, version=2
= sectsz=512 sunit=64 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[kubeexec] DEBUG 2016/11/16 14:33:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:331: Host: glusterfs2-1562006718-4d2io Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_7b8fbfe3ad7de9c825f082f91d0bf6ac-brick_a7cd48e117b3bdb30396f6714ed79058
Result: meta-data=/dev/mapper/vg_7b8fbfe3ad7de9c825f082f91d0bf6ac-brick_a7cd48e117b3bdb30396f6714ed79058 isize=512 agcount=16, agsize=524224 blks
= sectsz=512 attr=2, projid32bit=1
= crc=0 finobt=0
data = bsize=4096 blocks=8387584, imaxpct=25
= sunit=64 swidth=64 blks
naming =version 2 bsize=8192 ascii-ci=0 ftype=0
log =internal log bsize=4096 blocks=4096, version=2
= sectsz=512 sunit=64 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[kubeexec] DEBUG 2016/11/16 14:33:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:331: Host: glusterfs0-2272744551-a4ghp Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_a19f21522ad62a555ce29fcfa374019c-brick_150e1e80cac3a99bf245d519095ceb14
Result: meta-data=/dev/mapper/vg_a19f21522ad62a555ce29fcfa374019c-brick_150e1e80cac3a99bf245d519095ceb14 isize=512 agcount=16, agsize=524224 blks
= sectsz=512 attr=2, projid32bit=1
= crc=0 finobt=0
data = bsize=4096 blocks=8387584, imaxpct=25
= sunit=64 swidth=64 blks
naming =version 2 bsize=8192 ascii-ci=0 ftype=0
log =internal log bsize=4096 blocks=4096, version=2
= sectsz=512 sunit=64 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[kubeexec] DEBUG 2016/11/16 14:33:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:331: Host: glusterfs1-1373000839-qq9jv Command: echo "/dev/mapper/vg_71227ba841eb6ca845fb4315fe011b2c-brick_c3acc2d22c54dba073842beaf55da352 /var/lib/heketi/mounts/vg_71227ba841eb6ca845fb4315fe011b2c/brick_c3acc2d22c54dba073842beaf55da352 xfs rw,inode64,noatime,nouuid 1 2" | tee -a /var/lib/heketi/fstab > /dev/null
Result:
[kubeexec] DEBUG 2016/11/16 14:33:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:331: Host: glusterfs2-1562006718-4d2io Command: echo "/dev/mapper/vg_7b8fbfe3ad7de9c825f082f91d0bf6ac-brick_a7cd48e117b3bdb30396f6714ed79058 /var/lib/heketi/mounts/vg_7b8fbfe3ad7de9c825f082f91d0bf6ac/brick_a7cd48e117b3bdb30396f6714ed79058 xfs rw,inode64,noatime,nouuid 1 2" | tee -a /var/lib/heketi/fstab > /dev/null
Result:
[kubeexec] DEBUG 2016/11/16 14:33:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:331: Host: glusterfs0-2272744551-a4ghp Command: echo "/dev/mapper/vg_a19f21522ad62a555ce29fcfa374019c-brick_150e1e80cac3a99bf245d519095ceb14 /var/lib/heketi/mounts/vg_a19f21522ad62a555ce29fcfa374019c/brick_150e1e80cac3a99bf245d519095ceb14 xfs rw,inode64,noatime,nouuid 1 2" | tee -a /var/lib/heketi/fstab > /dev/null
Result:
[kubeexec] DEBUG 2016/11/16 14:33:30 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:331: Host: glusterfs2-1562006718-4d2io Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_7b8fbfe3ad7de9c825f082f91d0bf6ac-brick_a7cd48e117b3bdb30396f6714ed79058 /var/lib/heketi/mounts/vg_7b8fbfe3ad7de9c825f082f91d0bf6ac/brick_a7cd48e117b3bdb30396f6714ed79058
Result:
[kubeexec] DEBUG 2016/11/16 14:33:30 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:331: Host: glusterfs1-1373000839-qq9jv Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_71227ba841eb6ca845fb4315fe011b2c-brick_c3acc2d22c54dba073842beaf55da352 /var/lib/heketi/mounts/vg_71227ba841eb6ca845fb4315fe011b2c/brick_c3acc2d22c54dba073842beaf55da352
Result:
[kubeexec] DEBUG 2016/11/16 14:33:30 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:331: Host: glusterfs0-2272744551-a4ghp Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_a19f21522ad62a555ce29fcfa374019c-brick_150e1e80cac3a99bf245d519095ceb14 /var/lib/heketi/mounts/vg_a19f21522ad62a555ce29fcfa374019c/brick_150e1e80cac3a99bf245d519095ceb14
Result:
[kubeexec] DEBUG 2016/11/16 14:33:30 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:331: Host: glusterfs2-1562006718-4d2io Command: mkdir /var/lib/heketi/mounts/vg_7b8fbfe3ad7de9c825f082f91d0bf6ac/brick_a7cd48e117b3bdb30396f6714ed79058/brick
Result:
[negroni] Started GET /queue/ad6af2614f4464ded4cdb21a5e5e13d0
[negroni] Completed 200 OK in 77.535µs
[kubeexec] DEBUG 2016/11/16 14:33:30 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:331: Host: glusterfs1-1373000839-qq9jv Command: mkdir /var/lib/heketi/mounts/vg_71227ba841eb6ca845fb4315fe011b2c/brick_c3acc2d22c54dba073842beaf55da352/brick
Result:
[kubeexec] DEBUG 2016/11/16 14:33:30 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:331: Host: glusterfs0-2272744551-a4ghp Command: mkdir /var/lib/heketi/mounts/vg_a19f21522ad62a555ce29fcfa374019c/brick_150e1e80cac3a99bf245d519095ceb14/brick
Result:
[sshexec] INFO 2016/11/16 14:33:30 Creating volume heketidbstorage replica 3
[kubeexec] ERROR 2016/11/16 14:33:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:328: Failed to run command [gluster --mode=script volume create heketidbstorage replica 3 f6e5fcaf-35bf-424b-a2f9-900d3d1a9b11.pub.cloud.scaleway.com:/var/lib/heketi/mounts/vg_7b8fbfe3ad7de9c825f082f91d0bf6ac/brick_a7cd48e117b3bdb30396f6714ed79058/brick 220d4345-ea09-4ba3-bf8e-bc2c86bc821c.pub.cloud.scaleway.com:/var/lib/heketi/mounts/vg_71227ba841eb6ca845fb4315fe011b2c/brick_c3acc2d22c54dba073842beaf55da352/brick 120fa67f-b5fe-4232-8c77-0c78e1c1c8ce.pub.cloud.scaleway.com:/var/lib/heketi/mounts/vg_a19f21522ad62a555ce29fcfa374019c/brick_150e1e80cac3a99bf245d519095ceb14/brick] on glusterfs2-1562006718-4d2io: Err[error executing remote command: command terminated with non-zero exit code: Error executing in Docker Container: 1]: Stdout []: Stderr [volume create: heketidbstorage: failed: Host f6e5fcaf-35bf-424b-a2f9-900d3d1a9b11.pub.cloud.scaleway.com is not in ' Peer in Cluster' state
]
[negroni] Started GET /queue/ad6af2614f4464ded4cdb21a5e5e13d0
[negroni] Completed 200 OK in 102.477µs
[kubeexec] ERROR 2016/11/16 14:33:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:328: Failed to run command [gluster --mode=script volume stop heketidbstorage force] on glusterfs2-1562006718-4d2io: Err[error executing remote command: command terminated with non-zero exit code: Error executing in Docker Container: 1]: Stdout []: Stderr [volume stop: heketidbstorage: failed: Volume heketidbstorage does not exist
]
[sshexec] ERROR 2016/11/16 14:33:31 /src/github.com/heketi/heketi/executors/sshexec/volume.go:146: Unable to stop volume heketidbstorage: Unable to execute command on glusterfs2-1562006718-4d2io: volume stop: heketidbstorage: failed: Volume heketidbstorage does not exist
[kubeexec] ERROR 2016/11/16 14:33:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:328: Failed to run command [gluster --mode=script volume delete heketidbstorage] on glusterfs2-1562006718-4d2io: Err[error executing remote command: command terminated with non-zero exit code: Error executing in Docker Container: 1]: Stdout []: Stderr [volume delete: heketidbstorage: failed: Volume heketidbstorage does not exist
]
[sshexec] ERROR 2016/11/16 14:33:32 /src/github.com/heketi/heketi/executors/sshexec/volume.go:158: Unable to delete volume heketidbstorage: Unable to execute command on glusterfs2-1562006718-4d2io: volume delete: heketidbstorage: failed: Volume heketidbstorage does not exist
[heketi] INFO 2016/11/16 14:33:32 Deleting brick a7cd48e117b3bdb30396f6714ed79058
[heketi] INFO 2016/11/16 14:33:32 Deleting brick c3acc2d22c54dba073842beaf55da352
[heketi] INFO 2016/11/16 14:33:32 Deleting brick 150e1e80cac3a99bf245d519095ceb14
[kubeexec] DEBUG 2016/11/16 14:33:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:331: Host: glusterfs1-1373000839-qq9jv Command: umount /var/lib/heketi/mounts/vg_71227ba841eb6ca845fb4315fe011b2c/brick_c3acc2d22c54dba073842beaf55da352
Result:
[kubeexec] DEBUG 2016/11/16 14:33:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:331: Host: glusterfs2-1562006718-4d2io Command: umount /var/lib/heketi/mounts/vg_7b8fbfe3ad7de9c825f082f91d0bf6ac/brick_a7cd48e117b3bdb30396f6714ed79058
Result:
[negroni] Started GET /queue/ad6af2614f4464ded4cdb21a5e5e13d0
[negroni] Completed 200 OK in 73.856µs
[kubeexec] DEBUG 2016/11/16 14:33:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:331: Host: glusterfs0-2272744551-a4ghp Command: umount /var/lib/heketi/mounts/vg_a19f21522ad62a555ce29fcfa374019c/brick_150e1e80cac3a99bf245d519095ceb14
Result:
[negroni] Started GET /queue/ad6af2614f4464ded4cdb21a5e5e13d0
[negroni] Completed 200 OK in 108.532µs
[kubeexec] DEBUG 2016/11/16 14:33:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:331: Host: glusterfs1-1373000839-qq9jv Command: lvremove -f vg_71227ba841eb6ca845fb4315fe011b2c/tp_c3acc2d22c54dba073842beaf55da352
Result: Logical volume "brick_c3acc2d22c54dba073842beaf55da352" successfully removed
Logical volume "tp_c3acc2d22c54dba073842beaf55da352" successfully removed
[kubeexec] DEBUG 2016/11/16 14:33:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:331: Host: glusterfs2-1562006718-4d2io Command: lvremove -f vg_7b8fbfe3ad7de9c825f082f91d0bf6ac/tp_a7cd48e117b3bdb30396f6714ed79058
Result: Logical volume "brick_a7cd48e117b3bdb30396f6714ed79058" successfully removed
Logical volume "tp_a7cd48e117b3bdb30396f6714ed79058" successfully removed
[kubeexec] DEBUG 2016/11/16 14:33:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:331: Host: glusterfs0-2272744551-a4ghp Command: lvremove -f vg_a19f21522ad62a555ce29fcfa374019c/tp_150e1e80cac3a99bf245d519095ceb14
Result: Logical volume "brick_150e1e80cac3a99bf245d519095ceb14" successfully removed
Logical volume "tp_150e1e80cac3a99bf245d519095ceb14" successfully removed
[kubeexec] DEBUG 2016/11/16 14:33:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:331: Host: glusterfs1-1373000839-qq9jv Command: rmdir /var/lib/heketi/mounts/vg_71227ba841eb6ca845fb4315fe011b2c/brick_c3acc2d22c54dba073842beaf55da352
Result:
[kubeexec] DEBUG 2016/11/16 14:33:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:331: Host: glusterfs2-1562006718-4d2io Command: rmdir /var/lib/heketi/mounts/vg_7b8fbfe3ad7de9c825f082f91d0bf6ac/brick_a7cd48e117b3bdb30396f6714ed79058
Result:
[kubeexec] DEBUG 2016/11/16 14:33:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:331: Host: glusterfs0-2272744551-a4ghp Command: rmdir /var/lib/heketi/mounts/vg_a19f21522ad62a555ce29fcfa374019c/brick_150e1e80cac3a99bf245d519095ceb14
Result:
[kubeexec] DEBUG 2016/11/16 14:33:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:331: Host: glusterfs1-1373000839-qq9jv Command: sed -i.save "/brick_c3acc2d22c54dba073842beaf55da352/d" /var/lib/heketi/fstab
Result:
[kubeexec] DEBUG 2016/11/16 14:33:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:331: Host: glusterfs2-1562006718-4d2io Command: sed -i.save "/brick_a7cd48e117b3bdb30396f6714ed79058/d" /var/lib/heketi/fstab
Result:
[kubeexec] DEBUG 2016/11/16 14:33:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:331: Host: glusterfs0-2272744551-a4ghp Command: sed -i.save "/brick_150e1e80cac3a99bf245d519095ceb14/d" /var/lib/heketi/fstab
Result:
[heketi] ERROR 2016/11/16 14:33:34 /src/github.com/heketi/heketi/apps/glusterfs/app_volume.go:150: Failed to create volume: Unable to execute command on glusterfs2-1562006718-4d2io: volume create: heketidbstorage: failed: Host f6e5fcaf-35bf-424b-a2f9-900d3d1a9b11.pub.cloud.scaleway.com is not in ' Peer in Cluster' state
[asynchttp] INFO 2016/11/16 14:33:34 Completed job ad6af2614f4464ded4cdb21a5e5e13d0 in 7.372856513s
[negroni] Started GET /queue/ad6af2614f4464ded4cdb21a5e5e13d0
[negroni] Completed 500 Internal Server Error in 114.456µs
Ok, it might be something about this:
$ kubectl exec -ti glusterfs0-2272744551-a4ghp gluster peer status
Number of Peers: 2

Hostname: 220d4345-ea09-4ba3-bf8e-bc2c86bc821c.pub.cloud.scaleway.com
Uuid: fd499492-e26f-4c3f-919a-57ebced1439a
State: Accepted peer request (Connected)

Hostname: f6e5fcaf-35bf-424b-a2f9-900d3d1a9b11.pub.cloud.scaleway.com
Uuid: 8ba60735-809e-4ecc-aafd-6d7da3fc3fef
State: Accepted peer request (Connected)
$ kubectl exec -ti glusterfs1-1373000839-qq9jv gluster peer status
Number of Peers: 1
Hostname: 84-158-172-163.rev.cloud.scaleway.com
Uuid: badcc3c1-7f06-4b5a-9725-5e4d23152205
State: Accepted peer request (Disconnected)
$ kubectl exec -ti glusterfs2-1562006718-4d2io gluster peer status
Number of Peers: 1
Hostname: 84-158-172-163.rev.cloud.scaleway.com
Uuid: badcc3c1-7f06-4b5a-9725-5e4d23152205
State: Accepted peer request (Disconnected)
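For context, a fully joined pool reports "State: Peer in Cluster (Connected)" for every peer; "Accepted peer request" means the membership handshake never completed, which is exactly what the "is not in ' Peer in Cluster' state" error in the heketi log complains about. A quick way to count unhealthy peers from such output (a sketch; check_peers is a made-up helper, shown here against the glusterfs0 output above):

```shell
# check_peers: count peers NOT fully joined, from `gluster peer status`
# output on stdin. A made-up helper, not part of gluster or heketi.
check_peers() {
  grep '^State:' | grep -cv 'Peer in Cluster (Connected)'
}

# Fed the glusterfs0 output above, both peers are unhealthy:
bad=$(check_peers <<'EOF'
Number of Peers: 2

Hostname: 220d4345-ea09-4ba3-bf8e-bc2c86bc821c.pub.cloud.scaleway.com
Uuid: fd499492-e26f-4c3f-919a-57ebced1439a
State: Accepted peer request (Connected)

Hostname: f6e5fcaf-35bf-424b-a2f9-900d3d1a9b11.pub.cloud.scaleway.com
Uuid: 8ba60735-809e-4ecc-aafd-6d7da3fc3fef
State: Accepted peer request (Connected)
EOF
)
echo "$bad peer(s) not fully joined"   # prints: 2 peer(s) not fully joined
```

In practice you would pipe the live output in, e.g. `kubectl exec -ti glusterfs0-2272744551-a4ghp -- gluster peer status | check_peers`.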
Can you also provide the output of these, to understand the setup better:
# kubectl get nodes --show-labels
# kubectl get pods -o wide
In the topology file, the storage hostname must be the IP of the node where GlusterFS is running.
I am not completely sure what you are facing; it looks like a Gluster error. This might help: http://www.gluster.org/pipermail/gluster-users/2014-February/016186.html
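To illustrate that point, here is a minimal topology.json fragment in the gluster-kubernetes sample format; the node name "node0", the IP 10.1.0.10, and the device /dev/vdb are placeholders. The "storage" entry is the address glusterd uses for peer and brick traffic, so per the advice above it should be the node's IP rather than a DNS name:

```json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": ["node0"],
              "storage": ["10.1.0.10"]
            },
            "zone": 1
          },
          "devices": ["/dev/vdb"]
        }
      ]
    }
  ]
}
```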
Ok, in fact I did not use IP addresses but cluster-wide resolvable domain names. I saw the note in the docs about using IPs there, but the example below it shows a domain name, so maybe that should be changed. An explanation of this constraint would also be nice. Is it temporary?
When playing around with domain names, two of the three nodes could connect, but the third tried to connect to some reverse-lookup address.
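The 84-158-172-163.rev.cloud.scaleway.com hostname in the peer status output suggests what happened: glusterd appears to have identified the third node by the reverse-DNS (PTR) name of its source IP, which on cloud providers often differs from the forward name configured in the topology. A quick forward/reverse consistency check (a sketch; check_ptr is a made-up helper, and getent assumes a glibc system):

```shell
# check_ptr: print the name a reverse lookup returns for an IP.
# If this differs from the hostname you put in the topology, peers
# can end up recorded under the PTR name, as seen above.
check_ptr() {
  getent hosts "$1" | awk '{ print $2; exit }'
}

name=$(check_ptr 127.0.0.1)
echo "$name"   # on a typical Linux host prints: localhost
```

Running check_ptr against each node's public IP and comparing the result to the name used in the topology would show whether the reverse-lookup name is the mismatch.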
If I'm reading this right, the current ask is to document where and why to use IPs vs. hostnames. Is this still a request with the new deploy script?
Howdy! I'm gonna give this issue two weeks from when you opened it before just closing it for inactivity, so please respond within the next three days or so. :)
Yes, this would just be the ask to document where and why to use IPs instead of hostnames.