fstab / cifs
CIFS Flexvolume Plugin for Kubernetes
License: MIT License
How to install fstab/cifs in the Docker Desktop Kubernetes server for Windows?
I was not able to find the /usr/libexec/kubernetes/kubelet-plugins/volume/exec path in the Docker Desktop cluster.
Your username and repo name combination wins the internet for this week...
I am wondering if you can tell me what I need to set to give InnoDB the permissions it needs on the Persistent Volume when using fstab/cifs. Data is successfully written to the CIFS share, but when the pod spins up, it errors out because InnoDB does not appear to have permissions. The front-end pod spins up without any issues. What is the best way to resolve this, or is it an issue with the FlexVolume driver?
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mariadb-pv
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  flexVolume:
    driver: "fstab/cifs"
    fsType: "cifs"
    readOnly: false
    secretRef:
      name: "cifs-secret"
    options:
      networkPath: "\\10.0.0.165\moshpit-str2\Kubernetes\mariadb\"
      mountOptions: "dir_mode=0755,file_mode=0755,noperm"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mariadb-pvc
  namespace: drupal
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: mariadb
  namespace: drupal
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mariadb
  template:
    metadata:
      labels:
        app: mariadb
    spec:
      containers:
        - name: mariadb
          image: mariadb:latest
          ports:
            - containerPort: 3306
          env:
            - name: MYSQL_DATABASE
              value: xxx
            - name: MYSQL_USER
              value: xxx
            - name: MYSQL_PASSWORD
              value: xxx
            - name: MYSQL_RANDOM_ROOT_PASSWORD
              value: 'yes'
          volumeMounts:
            - mountPath: /var/lib/mysql/
              name: database
          resources:
            limits:
              cpu: '2'
              memory: '512Mi'
            requests:
              cpu: '500m'
              memory: '256Mi'
      volumes:
        - name: database
          persistentVolumeClaim:
            claimName: mariadb-pvc
---
kind: Service
apiVersion: v1
metadata:
  name: mariadb
  namespace: drupal
spec:
  ports:
    - port: 3306
      targetPort: 3306
  selector:
    app: mariadb
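Not a definitive fix, but two things worth checking in the PV above. First, the other manifests in this repo's issues write networkPath with forward slashes (//server/share) rather than backslashes. Second, OS error 13 on CIFS often means the files on the mount are owned by root while mysqld runs as the mysql user; mount.cifs supports uid= and gid= options to map ownership. A hypothetical variant of the options block (the uid/gid values are assumptions; verify the mysql user's uid inside the container with `kubectl exec <pod> -- id mysql`):

```yaml
# Sketch only: map ownership on the CIFS mount to the container's mysql
# user. uid/gid 999 is an assumption, not taken from the original issue.
options:
  networkPath: "//10.0.0.165/moshpit-str2/Kubernetes/mariadb"
  mountOptions: "uid=999,gid=999,dir_mode=0755,file_mode=0644,noperm"
```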
MySQL pod logs:
2021-04-23 00:32:34+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 1:10.5.9+maria~focal started.
2021-04-23 00:32:34+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2021-04-23 00:32:34+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 1:10.5.9+maria~focal started.
2021-04-23 0:32:35 0 [Note] mysqld (mysqld 10.5.9-MariaDB-1:10.5.9+maria~focal) starting as process 1 ...
2021-04-23 0:32:35 0 [Note] InnoDB: Uses event mutexes
2021-04-23 0:32:35 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
2021-04-23 0:32:35 0 [Note] InnoDB: Number of pools: 1
2021-04-23 0:32:35 0 [Note] InnoDB: Using ARMv8 crc32 instructions
2021-04-23 0:32:35 0 [Note] mysqld: O_TMPFILE is not supported on /tmp (disabling future attempts)
2021-04-23 0:32:35 0 [Note] InnoDB: Using Linux native AIO
2021-04-23 0:32:35 0 [Note] InnoDB: Initializing buffer pool, total size = 134217728, chunk size = 134217728
2021-04-23 0:32:35 0 [Note] InnoDB: Completed initialization of buffer pool
2021-04-23 0:32:35 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
2021-04-23 0:32:35 0 [Warning] InnoDB: Retry attempts for reading partial data failed.
2021-04-23 0:32:35 0 [ERROR] InnoDB: Operating system error number 13 in a file operation.
2021-04-23 0:32:35 0 [ERROR] InnoDB: The error means mysqld does not have the access rights to the directory.
2021-04-23 0:32:35 0 [ERROR] [FATAL] InnoDB: Tried to read 65536 bytes at offset 38400, but was only able to read 0.Cannot read from file. OS error number 13.
210423 0:32:35 [ERROR] mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
To report this bug, see https://mariadb.com/kb/en/reporting-bugs
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Server version: 10.5.9-MariaDB-1:10.5.9+maria~focal
key_buffer_size=134217728
read_buffer_size=131072
max_used_connections=0
max_threads=153
thread_count=0
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 467871 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x0 thread_stack 0x49000
mysqld(my_print_stacktrace+0x30)[0x55587a6a00]
Printing to addr2line failed
mysqld(handle_fatal_signal+0x45c)[0x5558262e2c]
linux-vdso.so.1(__kernel_rt_sigreturn+0x0)[0x7f85f5f7c0]
/lib/aarch64-linux-gnu/libc.so.6(gsignal+0xe0)[0x7f855fa138]
/lib/aarch64-linux-gnu/libc.so.6(abort+0x110)[0x7f855e6d68]
mysqld(+0xcb4fd0)[0x5558644fd0]
mysqld(+0xbf3ba8)[0x5558583ba8]
mysqld(+0xbdbb4c)[0x555856bb4c]
mysqld(+0xbe1f8c)[0x5558571f8c]
mysqld(+0xbe71fc)[0x55585771fc]
mysqld(+0xbe77fc)[0x55585777fc]
mysqld(+0x610654)[0x5557fa0654]
mysqld(+0xb7b92c)[0x555850b92c]
mysqld(_Z24ha_initialize_handlertonP13st_plugin_int+0x78)[0x5558265ce8]
mysqld(+0x70f83c)[0x555809f83c]
mysqld(_Z11plugin_initPiPPci+0x860)[0x55580a0910]
mysqld(+0x646850)[0x5557fd6850]
mysqld(_Z11mysqld_mainiPPc+0x424)[0x5557fdc14c]
/lib/aarch64-linux-gnu/libc.so.6(__libc_start_main+0xe8)[0x7f855e7090]
mysqld(+0x641648)[0x5557fd1648]
The manual page at https://mariadb.com/kb/en/how-to-produce-a-full-stack-trace-for-mysqld/ contains
information that should help you find out what is causing the crash.
Writing a core file...
Working directory at /var/lib/mysql
Resource Limits:
Limit Soft Limit Hard Limit Units
Max cpu time unlimited unlimited seconds
Max file size unlimited unlimited bytes
Max data size unlimited unlimited bytes
Max stack size 8388608 unlimited bytes
Max core file size unlimited unlimited bytes
Max resident set unlimited unlimited bytes
Max processes unlimited unlimited processes
Max open files 1048576 1048576 files
Max locked memory 65536 65536 bytes
Max address space unlimited unlimited bytes
Max file locks unlimited unlimited locks
Max pending signals 30215 30215 signals
Max msgqueue size 819200 819200 bytes
Max nice priority 0 0
Max realtime priority 0 0
Max realtime timeout unlimited unlimited us
Core pattern: core
Hi,
I am aware of issue #7 - but I can't get my pods "unstuck".
Default k8s wait time is 30s... which should be plenty.
The logs don't give me anything except:
Mo 26. Aug 18:05:30 CEST 2019 unmount /var/lib/kubelet/pods/851daafb-c819-11e9-9acb-0050569d4986/volumes/fstab~cifs/cifs
Every time I restart a pod or change it, the old one is stuck on "Terminating".
The volume is still mounted at the path mentioned in the unmount command...
Any suggestions?
Thank you sincerely!
I wonder if this can be used with Rancher and, if yes, whether there are step-by-step instructions on how to set it up via the GUI?
Thanks
The issue seems related to the fact that bash interprets exclamation marks because of history expansion: https://superuser.com/questions/133780/in-bash-how-do-i-escape-an-exclamation-mark.
If I use a user that does not have an exclamation mark in their password, the command works correctly.
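For reference, a quick sketch of the quoting behavior described above (the password is a made-up placeholder): in an interactive bash session, history expansion fires inside double quotes but not inside single quotes, so single-quoting the password is the simplest workaround.

```shell
# History expansion ('!') is only active in interactive bash sessions.
# Double quotes do NOT suppress it there; single quotes do.
pass='p@ss!word'          # safe: single quotes keep the '!' literal
printf '%s\n' "$pass"     # the password reaches the command unmodified
# In an interactive shell, "p@ss!word" in double quotes could instead
# trigger "event not found" or expand to a previous command.
```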
Basically, the volume plugin dir is not static; it depends on the system itself. Some vendors may choose to change the directory for various reasons. For example, GKE uses /home/kubernetes/flexvolume, and RKE uses /var/lib/kubelet/volumeplugins.
Users can find the correct directory by running ps aux | grep kubelet on the host and checking the --volume-plugin-dir parameter. If there is none, the default /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ will be used.
This is mainly FYI.
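The check above can be sketched as a small script (hedged: this only inspects the kubelet command line; the printed default applies when your distribution does not set the flag at all):

```shell
# Print the kubelet's --volume-plugin-dir if set, otherwise report the default.
dir=$(ps aux | grep '[k]ubelet' | grep -o -- '--volume-plugin-dir=[^ ]*' | head -n1)
if [ -n "$dir" ]; then
  echo "$dir"
else
  echo "not set; default: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/"
fi
```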
I can mount shares which use plain usernames, but when I try to mount a share where the user is a domain user, i.e., contains a backslash, I get Permission Denied.
I escaped the backslash correctly; I double-checked it.
Is it even possible to mount shares with domain users?
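It should be possible; mount.cifs also accepts a separate domain= option, which avoids backslash escaping in the username entirely. A sketch with placeholder values (CORP, alice, secret are assumptions, not taken from the issue):

```shell
# Two ways to pass a domain user to mount.cifs; the second avoids the
# backslash altogether. All values here are placeholders.
opts_escaped='username=CORP\alice,password=secret'        # single quotes keep the backslash literal
opts_domain='username=alice,domain=CORP,password=secret'  # no backslash needed
printf '%s\n' "$opts_domain"
# usage (sketch): mount -t cifs -o "$opts_domain" //server/share /mnt/share
```

With this plugin, the domain=CORP part would presumably go into the mountOptions field while the plain username stays in the secret.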
GKE nodes do not support smb/cifs, so I want to use this as a command rather than attaching secrets and all that stuff, e.g.
mount -t cifs -o username=<win_share_user>,password=<win_share_password> //WIN_SHARE_IP/<share_name> /mnt/win_share
In the case of large clusters, updating the cifs script becomes tedious, so I have created a DaemonSet for this. It's very basic, but it updates the cifs script on every host every 2 hours. Maybe include it in the repository and make the install procedure a container itself?
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fstab-cifs-updater
  namespace: default
  labels:
    k8s-app: fstab-cifs-updater
spec:
  selector:
    matchLabels:
      name: fstab-cifs-updater
  template:
    metadata:
      labels:
        name: fstab-cifs-updater
    spec:
      containers:
        - name: fstab-cifs-updater
          image: alpine
          command: ["/bin/sh", "-c"]
          # Download the raw script (the blob URL returns an HTML page) and
          # keep the directory name consistent (fstab~cifs).
          args: ["apk add curl ; mkdir -p /flex/fstab~cifs ; while true; do curl -fsSL -o /flex/fstab~cifs/cifs https://raw.githubusercontent.com/fstab/cifs/master/cifs ; chmod a+x /flex/fstab~cifs/cifs ; sleep 7200; done"]
          volumeMounts:
            - name: flexvolume
              mountPath: /flex
      terminationGracePeriodSeconds: 30
      volumes:
        - name: flexvolume
          hostPath:
            path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
This plugin has been working great for months, up until today when I tried to include a securityContext in the Pod spec that uses a PVC bound to a PV configured to use this plugin.
Example of what doesn't work (written from memory to give the gist of it):
apiVersion: v1
kind: Pod
metadata:
  name: example-doesnt-work
spec:
  securityContext:
    fsGroup: 1000
  containers:
    - name: doesntwork
      image: nginx
      volumeMounts:
        - mountPath: "/test-mount"
          name: mounttotest
  volumes:
    - name: mounttotest
      persistentVolumeClaim:
        claimName: cifs-vol-created-using-flexvol-plugin-pvc
When I don't include the securityContext, everything works great as usual. The behavior I see is that the pod gets stuck in a ContainerCreating state until the volume mounts time out. There are no meaningful logs in kubelet, docker, or the cluster logs. Does anyone have advice on what is happening or how to see what is happening?
Hello. I use the cifs plugin and it works when I use it in a Pod directly. I then tried the same volume in a Deployment, but it doesn't work:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: services
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: cifs-share
              mountPath: /data
          resources:
            limits:
              cpu: 300m
              memory: 128Mi
            requests:
              cpu: "200m"
              memory: 128Mi
      volumes:
        - name: cifs-share
          flexVolume:
            driver: fstab/cifs
            fsType: cifs
            options:
              mountOptions: dir_mode=0755,file_mode=0644,noperm
              networkPath: //server/data
            secretRef:
              name: cifs-secret
Pod status:
nginx-deployment-5864c58f6f-rbssq 0/1 ContainerCreating 0 8m4s
Describe pod:
Unable to mount volumes for pod "nginx-deployment-5864c58f6f-rbssq_ccqc-services(074e8f98-b98d-408c-9070-46864d460cf7)": timeout expired waiting for volumes to attach or mount for pod "ccqc-services"/"nginx-deployment-5864c58f6f-rbssq". list of unmounted volumes=[cifs-share]. list of unattached volumes=[cifs-share default-token-24bkz]
I'm following the guide to test-mount CIFS in a k8s pod, but got the error below:
~> kubectl describe po busybox
Name: busybox
Namespace: default
Priority: 0
Node: sc25k8s2002/10.237.8.22
Start Time: Thu, 17 Dec 2020 09:35:52 +0000
Labels:
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"busybox","namespace":"default"},"spec":{"containers":[{"command":["sl...
Status: Pending
IP:
IPs:
Containers:
busybox:
Container ID:
Image: busybox
Image ID:
Port:
Host Port:
Command:
sleep
3600
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment:
Mounts:
/tmp from test (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hpzrn (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
test:
Type: FlexVolume (a generic volume resource that is provisioned/attached using an exec based plugin)
Driver: fstab/cifs
FSType: cifs
SecretRef: &LocalObjectReference{Name:cifs-secret,}
ReadOnly: false
Options: map[mountOptions:vers=2.1,dir_mode=0755,file_mode=0644,noperm networkPath://172.31.193.29/share_50983137_0abc_4d57_91ca_04c7c7c1207f]
default-token-hpzrn:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hpzrn
Optional: false
QoS Class: BestEffort
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
Normal Scheduled default-scheduler Successfully assigned default/busybox to k8s2002
Warning FailedMount 11s (x5 over 21s) kubelet, k8s2002 MountVolume.SetUp failed for volume "test" : mount command failed, status: Failure, reason: cifs mount: failed to mount the network path: mount error(13): Permission denied
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
Warning FailedMount 1s kubelet, k8s2002 MountVolume.SetUp failed for volume "test" : mount command failed, status: Failure, reason: cifs mount: failed to mount the network path: mount error(2): No such file or directory
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
I can use "mount -t cifs" to mount the CIFS share on the k8s nodes directly.
Could it be due to the dir "test" not being created in /pods//volumes/fstab~cifs/ for some reason?
First, thanks for your work; it saved us on some old CIFS shares. In our case we patched the script to drop the permission change on mount.
FlexVolumes are now deprecated. Do you plan a conversion to a CSI driver?
I'm following the instructions but keep getting
E0709 11:41:16.026163 13823 desired_state_of_world_populator.go:289] Failed to add volume "cifs" (specName: "cifs") for pod "f31d6a76-a25f-11e9-bbc6-005056bb6d71" to desiredStateOfWorld. err=failed to get Plugin from volumeSpec for volume "cifs" err=no volume plugin matched.
The cifs script was manually copied to the directory mentioned in the document and I can confirm it is there and executable on all nodes. I'm running OpenShift Origin 3.9. Any idea why the flexVolume controller can't find the plugin?
I've also checked my configuration and volume-plugin-dir is not being defined. I guess it defaults to the subdirectory described in the document?
Regards,
Luiz
Trying to install the plugin on Kubernetes in Docker for Mac, I copied the cifs script to the plugin dir in k8s_kube-controller, but I'm not able to execute the command cifs init.
The result is cifs not found.
How do I install it on Docker for Mac?