Comments (33)
This is still a problem; any updates? I also see a factor of 3 to 4 slowdown on macOS 10.15 with sshfs vs. rsync when transferring large files.
(ska3) ➜ repro5_issues sshfs -V
SSHFS version 2.5 (OSXFUSE SSHFS 2.5.0)
OSXFUSE 3.10.4
FUSE library version: 2.9.7
fuse: no mount point
from sshfs.
This fix seems to be at least partly done in the source tree (https://github.com/libfuse/sshfs/blob/sshfs_2.x/sshfs.c#L1900 and https://github.com/libfuse/sshfs/blob/sshfs_2.x/sshfs.c#L3112), which checks for __APPLE__ and sets the blksize of the stat_t object to 0. That improves performance quite a bit (from ~300 KB/s to 2.5 MB/s), but with one more change it can be doubled again (to 5.25 MB/s+, which is just about what I get with sftp directly):
At https://github.com/libfuse/sshfs/blob/sshfs_2.x/sshfs.c#L3865 change:
sshfs.blksize = 4096;
To:
#ifdef __APPLE__
sshfs.blksize = 0;
#else
sshfs.blksize = 4096;
#endif
(or combine with one of the other __APPLE__ checks).
from sshfs.
@vszakats I have been experiencing similar throughput issues on multiple macOS Sierra machines connecting to a local SSH server over 2.4 GHz WLAN. I routinely get 6 MB/s+ using scp, but rsync over the local SSHFS mount was giving no more than 2.2 MB/s on any machine. I applied the patch you referenced and immediately saw the SSHFS speeds matching scp, even without setting -o iosize.
This is also with:
$ sshfs -V
SSHFS version 2.10
OSXFUSE 3.7.1
FUSE library version: 2.9.7
fuse: no mount point
from sshfs.
I see this bug is still open but no progress since March last year. Has there been any progress behind the scenes?
I experience a similar issue. I've tried two versions of sshfs (2.5-3 and 2.8-1) on F25. It acts as if something is rate-limiting the transfer, as the speed is almost identical (to the KB/s) between tests. I have confirmed that connectivity and encryption aren't the issue, as transfers work perfectly well (i.e. ~6 MB/s) when I:
- SCP from source server to destination server
- SCP from destination server to source server
- SFTP from source server to destination server
- SFTP from destination server to source server
- Rsync over SSH from source server to destination server
- Rsync over SSH from destination server to source server
Now things get interesting. If I use SSHFS and download a file via rsync from the source server to the destination server (i.e. rsync -avhr /mnt/sshfs-MountPoint/file Destination directory), I get ~940 KB/s. However, if I reverse the direction (rsync -avhr source file /mnt/sshfs-MountPoint/sshfs-Destination/), it works perfectly.
There was someone else who seems to have experienced the same issue; however, their thread on the Ubuntu forums has been closed: https://askubuntu.com/questions/879415/sshfs-speeds-upload-fast-download-is-slow
I'm using F25 with Fuse 2.9.7-1, kernel 4.9.6-200.
Strace output is as follows:
Full transfer speed: https://www.dropbox.com/s/z0nybg7803p8832/fasttransfer.txt?dl=0
Slow transfer speed: https://www.dropbox.com/s/9jjwho1179uwvqi/slowtransfer.txt?dl=0
Please let me know what other information you need.
from sshfs.
Hey,
I wanted to follow up on this issue, since I am also experiencing speed problems, especially when I compare the speeds achieved using native sftp and sshfs. Here's my environment:
Local client on 200MBit/sec cable link
$ sshfs --version
SSHFS version 2.5
FUSE library version: 2.9.4
fusermount version: 2.9.4
using FUSE kernel interface version 7.19
Ubuntu Server 16.04.2, Linux 4.4.0-62-generic
OpenSSH_7.2p2 Ubuntu-4ubuntu2.1, OpenSSL 1.0.2g 1 Mar 2016
Remote Server on university 1 Gigabit Link:
Debian 9 Server, Linux 4.3.0-1-amd64 #1 SMP Debian 4.3.5-1 (2016-02-06)
OpenSSH_7.4p1 Debian-6, OpenSSL 1.0.2k 26 Jan 2017
I got a directory mounted from the server on my client using:
sshfs root@server:/home/example -p 13448 -o allow_other,ro,kernel_cache,cache=yes,reconnect,ServerAliveInterval=15,ServerAliveCountMax=3,sshfs_sync,no_readahead -o uid=1000 -o gid=1000
Now testing shows, transferring the very same ubuntu iso test file:
sftp directly: up to 23 MByte/s -> almost my full 200 MBit downlink bandwidth
sshfs copy: max 8.5 MByte/s -> less than half of native sftp speed
I repeated the tests many times, so they should be valid. Of course bandwidth may vary on both sides due to other factors, but still: I never get more than 8.5 MByte/s using sshfs to copy the test file.
So is this a confirmed bug, or might there be other options that improve performance?
As shown above I am on the ubuntu default package v2.5 of sshfs. Would a self compiled latest version help?
thanks
boba
from sshfs.
So according to osxfuse/osxfuse#389, maybe the sshfs st_blksize should be configurable? I recompiled sshfs 2.9 with sshfs.blksize = 4096*1024; instead of 4096, and I'm at 55 MB/s using osxfuse 3.4.0. Setting sshfs.blksize = 0 in the code and using -o iosize=8388608 brings me back to ±105 MB/s, which is fantastic. Not sure what other effects this iosize would have, though.
from sshfs.
I can confirm that this patch (applied to https://github.com/libfuse/sshfs/releases/download/sshfs-2.10/sshfs-2.10.tar.gz as distributed via Homebrew):
diff --git a/sshfs.c b/sshfs.c
index fb566f4..d1c1fb2 100644
--- a/sshfs.c
+++ b/sshfs.c
@@ -3862,7 +3862,11 @@
}
#endif /* __APPLE__ */
- sshfs.blksize = 4096;
+#ifdef __APPLE__
+ sshfs.blksize = 0;
+#else
+ sshfs.blksize = 4096;
+#endif
/* SFTP spec says all servers should allow at least 32k I/O */
sshfs.max_read = 32768;
sshfs.max_write = 32768;
together with this sshfs option:
-o iosize=1048576
fixes the speed issue.
Instead of a zero default for macOS, it'd be nice to set it to a sane, speedy value and let the iosize option work at all times, IMO. This would make it work as expected by default while still letting the value be fine-tuned at runtime.
UPDATE: 1048576 (1 MiB) seemed to give the best throughput in my tests (on an 802.11n 2.4 GHz WLAN with a single 1.25 GiB file: 13 minutes (before) vs. 2 minutes (after) transfer time; the server was a Debian ARM box).
$ sshfs -V
SSHFS version 2.10
OSXFUSE 3.7.1
FUSE library version: 2.9.7
fuse: no mount point
This means that db149d1 didn't resolve the issue for all cases. (Haven't tested the osxfuse version of the patch though — it has one extra hunk: osxfuse/sshfs@5c0dbfe)
from sshfs.
I am on Ubuntu 16.04 connecting to a Debian Stretch box across the Atlantic.
$ uname -r
4.13.0-38-generic
# the remote host is 4.15.11
$ ping remotehost
...time=156 ms
$ sshfs -V
SSHFS version 3.3.1
FUSE library version 3.2.1
using FUSE kernel interface version 7.26
fusermount3 version: 3.2.1
# the test file is 38MB of uncompressable data. larger files have the same performance
# EDIT: with sshfs. larger files perform even better with libssh2: 7.5x faster than sshfs!
$ time cp /mnt/sshfs/test /tmpfs/test
real 0m21.118s
user 0m0.010s
sys 0m0.096s
$ time scp remotehost:test /tmpfs/test
real 0m8.175s
user 0m0.310s
sys 0m0.456s
$ time sftp_using_libssh2.py remotehost:test # using the ssh2-python module
Finished file read in 0:00:04.816173
It seems like linking against libssh2 might be a good solution?
from sshfs.
This is not only a problem on macOS, and it is not an isolated issue either.
In fact, in my experience, the only time where copying a large file can sustain very high network throughput is when it is over a very low latency connection.
400 Mbps connection with 20 ms ping → 10 MiB/s max
Is there anything that can be tweaked on other OSes? (The iosize option seems to be osxfuse-only.)
from sshfs.
That report is rather long and convoluted. If there's a way to create a simple(!) C program that reads (or writes) data from a file and is slow over sshfs but fast without it, that would much improve the chances of figuring out what is happening here.
from sshfs.
not C code, but simple enough to try.
from sshfs.
I get the same speed for rsync and scp.
$ scp ebox:bigfile .
bigfile 100% 5120KB 640.0KB/s 00:08
$ sshfs ebox: mnt/
$ rsync -ah --progress mnt/bigfile dst/
sending incremental file list
bigfile
5.24M 100% 662.07kB/s 0:00:07 (xfr#1, to-chk=0/1)
from sshfs.
@Nikratio is there something I can do on my end with sshfs to provide helpful diagnostic logging information?
from sshfs.
Trying to narrow down the problem (try different servers and different client versions) is always a good first step. Providing ssh access to a server where you experience the slowdown (so that others can attempt to reproduce the issue) may also help. Finally, you could try running with strace -T -tt (worst), with profiling (the -pg compile option, better), or with oprofile (best). Ideally, you would gather two profiles: one where the problem appears and one where it does not.
from sshfs.
Well, I think my message from Mar 15 (right above yours) should answer your question about what information is needed. That said, I unfortunately have very little time for sshfs at the moment so even with that information it may take a while.
from sshfs.
Hi,
I believe I have supplied the information requested in your post of Mar 15, i.e.:
- Multiple clients and servers
- Multiple versions of sshfs
- Two strace profiles (one where I have the problem and one where I don't)
I did not offer SSH access as this is a public forum but if anyone would like, they're welcome to contact me directly.
from sshfs.
Testing different installs, I've found that the problem no longer occurs if I downgrade the macOS installs to SSHFS 2.5 + FUSE 2.7.3 / OSXFUSE 2.8.0 (previously at FUSE 3.5.5; I also see it at 3.5.6).
from sshfs.
Could you explicitly enable/disable the nodelay workaround (-o workaround=[no]nodelaysrv,[no]nodelay, 4 combinations) and report whether that affects performance? You may have to recompile with --enable-sshnodelay.
from sshfs.
Got sshfs 2.9 from release
$ ./sshfs --version
SSHFS version 2.9
OSXFUSE 3.6.0
FUSE library version: 2.9.7
fuse: no mount point
Compiled with ./configure --enable-sshnodelay and tested all 4 combinations, each time mounting the remote disk and copying a file from remote to local with rsync --progress. The speed remains about 8 times lower than a straight rsync over ssh (±10 MB/s vs. ±80 MB/s).
from sshfs.
Thanks! What speed do you get with scp?
from sshfs.
About 100-110 MB/s
from sshfs.
It seems that for @boba23 downgrading to sshfs 2.5 did not fix the issue, yet for @ibaun downgrading to sshfs 2.5, FUSE 2.7.3, and OSXFUSE 2.8.0 did fix the issue.
@boba23: could you please try to downgrade FUSE/OSXFUSE as well, and see if that fixes the problem?
@ibaun: could you please try to downgrade only sshfs, and see if the problem still occurs?
from sshfs.
Besides the different versions of the code used, I guess people should also report the latency of the network link used. Things tend to get slow if there are frequent round trips over a (relatively) high-latency link.
Also, if network throughput is high and latency is high, TCP/IP might need some tuning for optimal performance.
from sshfs.
I downgraded sshfs to 2.5 using a build from source, but can't seem to get a connection to the server with that version.
$ ./sshfs --version
SSHFS version 2.5
OSXFUSE 3.6.0
FUSE library version: 2.9.7
fuse: no mount point
Mount is successful, but 'ls' on the mount point hangs indefinitely.
(Well, it's not a system-wide downgrade, just running from the source dir; does that make a difference?)
@ThomasWaldmann this is on a local link, only a local switch in between. If ping is a valid way to measure the latency:
$ ping 192.168.0.170
PING 192.168.0.170 (192.168.0.170): 56 data bytes
64 bytes from 192.168.0.170: icmp_seq=0 ttl=64 time=0.446 ms
64 bytes from 192.168.0.170: icmp_seq=1 ttl=64 time=0.227 ms
64 bytes from 192.168.0.170: icmp_seq=2 ttl=64 time=0.231 ms
64 bytes from 192.168.0.170: icmp_seq=3 ttl=64 time=0.180 ms
from sshfs.
Grmbl. OK, can you try to stay at sshfs 2.9 but downgrade osxfuse to 2.8.0 (this should also change the reported FUSE library version)?
from sshfs.
The problem seems to be caused by the osxfuse version. I've successfully downgraded sshfs to 2.5 using a precompiled binary, and that didn't help. The only thing that helped was indeed putting osxfuse back to 2.8.0.
It seems that the slowdown was introduced between osxfuse 2.8.3 and 3.0.4. 2.8.1 seems to be the fastest; at 2.8.2 the speed is already a little lower, and at 3.0.4 the speed drops completely.
To recap:
- scp speed : 110 MB/s
- rsync from sshfs 2.9.0 + osxfuse 2.8.1: ± 55 MB/s
- rsync from sshfs 2.9.0 + osxfuse 2.8.2: ± 50 MB/s
- rsync from sshfs 2.9.0 + osxfuse 2.8.3: ± 50 MB/s
- rsync from sshfs 2.9.0 + osxfuse 3.0.4: ± 10 MB/s
from sshfs.
Alright, that's progress. So although the symptoms are similar, it seems there are two different problems at play here. @ibaun, could you report your problem at https://github.com/osxfuse/osxfuse/issues? @boba23: you are using Linux, right? So we need to do some further debugging. As a first step, could you confirm that the problem is still present in the most recent sshfs?
from sshfs.
Benjamin, could you take a look at this (OS-X specific) proposed change? It seems to increase performance a lot. Thanks!
from sshfs.
@bfleischer ping. Could you take a look?
from sshfs.
As discussed in issue #84, when this is implemented care needs to be taken to avoid a division by zero in sshfs_statfs().
from sshfs.
@vszakats @niharG8 Help... the patch did not work for me... still slow transfer speeds (400 KB/s instead of 2 MB/s)
$ sshfs -V
SSHFS version 2.10.0
OSXFUSE 3.7.1
FUSE library version: 2.9.7
fuse: no mount point
from sshfs.
@bfleischer ping. Could you take a look?
from sshfs.
Any news about this issue?
from sshfs.