Exception info, if helpful:
Backtrace on logger:
```
2011-08-26 08:52:37,406 30818 paramiko.transport ERROR Exception: Remote transport is ignoring rekey requests
2011-08-26 08:52:37,407 30818 paramiko.transport ERROR Traceback (most recent call last):
2011-08-26 08:52:37,407 30818 paramiko.transport ERROR   File "build/bdist.linux-x86_64/egg/paramiko/transport.py", line 1524, in run
2011-08-26 08:52:37,407 30818 paramiko.transport ERROR     ptype, m = self.packetizer.read_message()
2011-08-26 08:52:37,407 30818 paramiko.transport ERROR   File "build/bdist.linux-x86_64/egg/paramiko/packet.py", line 378, in read_message
2011-08-26 08:52:37,408 30818 paramiko.transport ERROR     raise SSHException('Remote transport is ignoring rekey requests')
2011-08-26 08:52:37,408 30818 paramiko.transport ERROR SSHException: Remote transport is ignoring rekey requests
```
Backtrace on exception:
```
  File "build/bdist.linux-x86_64/egg/paramiko/sftp_client.py", line 573, in put
    fr.write(data)
  File "build/bdist.linux-x86_64/egg/paramiko/file.py", line 314, in write
    self._write_all(data)
  File "build/bdist.linux-x86_64/egg/paramiko/file.py", line 435, in _write_all
    count = self._write(data)
  File "build/bdist.linux-x86_64/egg/paramiko/sftp_file.py", line 165, in _write
    t, msg = self.sftp._read_response(req)
  File "build/bdist.linux-x86_64/egg/paramiko/sftp_client.py", line 667, in _read_response
    raise SSHException('Server connection dropped: %s' % (str(e),))
SSHException: Server connection dropped:
```
---
Chris, ironically, I ran into this issue today as well.
What value did you end up picking instead of 20 packets?
---
The value is entirely dependent on your situation, but basically, it goes like this:
- Determine your bandwidth between the two endpoints. Run scp with a large file (>1 GB) to get a good average.
- Determine your "round trip time" (rtt). Run ping from one end point to the other. It'll provide info on this.
- Note that most of the "packets" paramiko is using are 24k.
- Estimate the number of packets that go out from this side in the time it takes one rekey packet to come back from the other side. Note this is a lower bound since it assumes that the remote side instantly puts a rekey packet out on the wire the moment it receives one.
num_packets = (bandwidth_MB * 1024 / 24) * (rtt_msec / 1000) / 2
The factor of two gets it down from a round-trip time to a one-way trip time. The 1024 converts from MB (which bandwidth is in) to KB, which the packet size (24) is in. The 1000 converts from milliseconds (which ping reports for rtt) into seconds, which everything else is in.
- Take num_packets and multiply it by a safety margin, at least 2 or 3x. I believe the downside here is that paramiko will buffer more packets the larger the number is, but it's been a while since I read the code. Unless you're heavily memory constrained, you can get away with a large number here.
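For illustration, the estimate above can be sketched in a few lines of Python. This is only a sketch of the calculation; `estimate_rekey_packets` and its parameters are made-up names, not part of paramiko's API.

```python
PACKET_SIZE_KB = 24  # typical paramiko packet size noted above


def estimate_rekey_packets(bandwidth_mb_per_s, rtt_msec, safety_margin=3):
    """Estimate how many packets are in flight during half an RTT,
    scaled by a safety margin (2-3x suggested above)."""
    # MB/s -> KB/s, then divide by packet size to get packets/s
    packets_per_sec = (bandwidth_mb_per_s * 1024) / PACKET_SIZE_KB
    # ping reports a round trip in ms; halve it for the one-way time
    one_way_sec = (rtt_msec / 1000) / 2
    return int(packets_per_sec * one_way_sec * safety_margin)


# Example: a 10 MB/s link with a 50 ms RTT
print(estimate_rekey_packets(10, 50))
```

The result is a lower bound scaled by the margin, so rounding down with `int()` is fine for a ballpark.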
---
I was looking for a ballpark figure. I don't have the luxury of doing this calculation: I'm using paramiko on the server side, so my clients will all have different optimal values. I'm using paramiko as an SSH server to expose access to rsync, and the client I'm using is OpenSSH.
First I tried 120, I got to 3GB transferred before dying.
Now I am trying 1024. We will see how that works.
I reviewed the code of the dropbear SSH server. It disallows sending packets during kex rekeying; instead, it enqueues them on a linked list. No limit is enforced (other than available memory). The relevant code is in packet.c; see:
enqueue_reply_packet()
maybe_flush_reply_queue()
encrypt_packet()
The queue is activated when ses.dataallowed == 0. kex.c handles rekeying and sets the dataallowed member to 0 after sending/receiving a rekey request.
The OpenSSH implementation is similar. packet.c enqueues outbound packets in packet_send2() when active_state->rekeying == 1.
My thought is that the 20-packet receive limit should simply disappear; I don't see any similar limit on received packets in either implementation. I also don't see any queuing of outbound packets in paramiko. That's just a difference from the other implementations, though; I'm not sure whether it's a problem.
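The enqueue-during-rekey behavior described above can be modeled in a few lines of Python. This is a toy sketch of the idea, not dropbear, OpenSSH, or paramiko code; all names here are illustrative.

```python
from collections import deque


class OutboundQueue:
    """Toy model of the behavior described above: while a rekey is in
    progress, outbound data packets are queued instead of sent, then
    flushed once the new keys are in place."""

    def __init__(self, send):
        self.send = send        # callable that puts a packet on the wire
        self.rekeying = False
        self.pending = deque()  # unbounded, like dropbear's linked list

    def send_packet(self, packet):
        if self.rekeying:
            self.pending.append(packet)  # cf. enqueue_reply_packet()
        else:
            self.send(packet)

    def rekey_done(self):
        self.rekeying = False
        while self.pending:              # cf. maybe_flush_reply_queue()
            self.send(self.pending.popleft())


wire = []
q = OutboundQueue(wire.append)
q.rekeying = True          # a rekey request was sent or received
q.send_packet(b"a")
q.send_packet(b"b")        # both are held, nothing hits the wire yet
q.rekey_done()             # new keys in place: flush in order
```

With queuing on the sender's side like this, the receiver never needs a "packets since rekey request" limit at all, which matches the observation that neither C implementation has one.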
---
The new version of duplicity (http://duplicity.nongnu.org/) uses paramiko as a transport backend and seems to suffer from this bug as well. It will normally crash at some point during a full backup with the following stack trace:
```
ssh: Exception: Remote transport is ignoring rekey requests
ssh: Traceback (most recent call last):
ssh:   File "/usr/lib/python2.7/dist-packages/paramiko/transport.py", line 1524, in run
ssh:     ptype, m = self.packetizer.read_message()
ssh:   File "/usr/lib/python2.7/dist-packages/paramiko/packet.py", line 378, in read_message
ssh:     raise SSHException('Remote transport is ignoring rekey requests')
ssh: SSHException: Remote transport is ignoring rekey requests
ssh:
BackendException: sftp put of /tmp/duplicity-3KymQx-tempdir/mktemp-3Ehncv-42 (as duplicity-full.20120229T160640Z.vol41.difftar.gpg) failed: Server connection dropped:
```
Any ideas for a real fix to this (instead of enlarging the number of packets allowed to receive)?
---
The problem boils down to a sort of timeout (measured in packets received, not seconds) that is too short. The options are then to remove the timeout or make it longer, perhaps by some fancy algorithm. Rekeying is part of the ssh protocol and is there for security reasons. So any "real fix" necessarily involves enlarging the number of packets allowed.
That's not to say we can't do better than a hard-coded limit. Something that auto-tunes based on the formula above is probably a decent idea. I'd also be curious what the OpenSSH codebase does, since it does not suffer from this issue as far as I can tell.
---
One way to fix this, without disabling transmission during rekeying and without needing a complex (and possibly error-prone) time-based algorithm, would be to initiate the rekeying much earlier (at 500 MB, for example) and only raise the exception after 1 GB has been received, as we do now. This weekend I'll see if I can put together a patch that does this.
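The two-threshold idea could look something like the following sketch. The constants and names here are illustrative, not paramiko's actual internals, and the exception type stands in for paramiko's `SSHException`.

```python
# Soft threshold: proactively request a rekey well before the limit.
SOFT_REKEY_BYTES = 500 * 1024 * 1024   # 500 MB
# Hard threshold: only give up if the peer still hasn't rekeyed by here.
HARD_REKEY_BYTES = 1024 * 1024 * 1024  # 1 GB


def check_rekey(bytes_since_rekey, rekey_requested):
    """Decide what the transport should do after receiving more data.

    Returns "request_rekey" when the soft threshold is crossed and no
    request is outstanding, "ok" otherwise; raises only at the hard
    threshold.
    """
    if bytes_since_rekey >= HARD_REKEY_BYTES:
        raise Exception("Remote transport is ignoring rekey requests")
    if bytes_since_rekey >= SOFT_REKEY_BYTES and not rekey_requested:
        return "request_rekey"
    return "ok"
```

The gap between the two thresholds gives the peer roughly 500 MB worth of traffic to respond, rather than the fixed 20 packets, without any time-based bookkeeping.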
---
This patch seems to work: #63
Anyone want to try it out?
---
> This patch seems to work: #63
> Anyone want to try it out?
Works for me; with it, my duplicity backups manage to get past 1GB without throwing an exception.
---
I think that this is related to this Twisted Conch bug http://twistedmatrix.com/trac/ticket/4395
---
So what's the process for getting this merged to master?
---
Is this still an issue with the latest release v1.15.2?
I think this can be marked as closed now.
---
Given the age, it's at least a little likely, @simon-online - though I know there are still outstanding issues re: "large" file transfers (also with their own PRs, though...). Not sure if same or different.
Overall, we really need a thorough cleaning of the ticket queue (even if it is simply "close anything w/o activity in the last year" on the assumption that critical issues will get re-upped). Something I hope to find time for in the mid term.
---
For what it's worth, I can confirm that I was regularly hitting this issue with v1.7.7.1, but since updating to v1.15.2 over a month ago the issue has stopped.
---
Thanks a lot for double checking, @simon-online!
---
For what it's worth, I'm on 2.12.0 and the problem is (still?) present. The workaround provided here #151 (comment) worked wonders for me.
---
If you're still observing this problem, @fthiery, then likely your root cause is different from that discussed in this ticket, at least to some degree.
Please also test on the most recent 3.x release, and if you see the problem there, open a new ticket to report it. Thanks!