openssh-rust / openssh
Scriptable SSH through OpenSSH in Rust
License: Apache License 2.0
How do I go about this the right way? I would like to see an example such that it reads stdio and writes the data to a file in real time.
Here's my current thought process on how I might do the above.
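One way to sketch that thought process, using std::process locally as a stand-in for the remote command (with openssh you would take the RemoteChild's piped stdout instead; the function and file names here are illustrative):

```rust
use std::fs::File;
use std::io::{copy, BufReader};
use std::process::{Command, Stdio};

// Stream a child's stdout into a file as it is produced, instead of
// buffering the whole output in memory first.
fn stream_to_file(mut cmd: Command, path: &str) -> std::io::Result<()> {
    let mut child = cmd.stdout(Stdio::piped()).spawn()?;
    let mut reader = BufReader::new(child.stdout.take().expect("stdout piped"));
    let mut file = File::create(path)?;
    // io::copy reads and writes incrementally, chunk by chunk.
    copy(&mut reader, &mut file)?;
    child.wait()?;
    Ok(())
}
```

The same shape should carry over to the async case with tokio::io::copy once the stdout handle is taken out of the child.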
The same as arc_command, but for shell.
I did this very easily via an extension trait, but it should be available out-of-the-box imo considering arc_command and arc_raw_command are there:
trait ArcShellExt {
fn arc_shell<S: AsRef<str>>(self: Arc<Self>, command: S) -> OwningCommand<Arc<Self>>;
}
impl ArcShellExt for Session {
fn arc_shell<S: AsRef<str>>(self: Arc<Self>, command: S) -> OwningCommand<Arc<Self>> {
let mut cmd = self.arc_command("sh");
cmd.arg("-c").arg(command.as_ref());
cmd
}
}
Hi. I have a project where I'm trying to control ssh (and maybe sftp) from a Rust parent process, and I'd love to use your library.
However, the library seems very opinionated about passing -S etc. options.
How would you feel about an API that lets ssh just do what it is configured to do, instead of forcing its hand so much? Right now, the library won't e.g. let me reuse a master connection that was opened from elsewhere.
Hi,
First of all, this crate is really awesome in that it can reuse all the ssh configuration in ~/.ssh/config and the features it has. Thank you for implementing this!
After I realized that this crate communicates with the ssh multiplex server by creating an ssh process, and thus has problems handling errors reliably, I have been thinking of writing a Rust crate that can communicate with the ssh multiplex server directly.
Fortunately, ssh does have a document on its multiplex protocol, and I was able to implement openssh-mux-client, which acts as an ssh multiplex client in pure Rust.
I have written a few test cases to make sure the basic operations are working as intended, while some features are implemented but not tested. There are also two features that I didn't implement.
While it is extremely likely there are bugs in my code, I think it is ready for testing.
Thus, I think it would be great if openssh-rs could add a feature that enables it to use my crate to communicate with the ssh multiplex server directly, which would help provide better error messages and avoid unnecessary process creation.
This features exists in bossy::Command here: https://docs.rs/bossy/latest/bossy/struct.Command.html#method.with_args
It is quite useful, allowing you to write this:
let kill_cmd = session.command("/root/.cargo/bin/nu").with_args([
"-c",
r##"ps | filter {|ps| $ps.name == "surreal"} | get pid | each {|pid| kill $pid }"##,
]);
... instead of this:
let mut kill_cmd = session.command("/root/.cargo/bin/nu");
kill_cmd.args([
"-c",
r##"ps | filter {|ps| $ps.name == "surreal"} | get pid | each {|pid| kill $pid }"##,
]);
This saves using the mut keyword and a new line, and can even be implemented trivially using an extension trait.
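As a minimal sketch of such an extension trait, demonstrated here on std::process::Command rather than openssh's own Command (assumption: the same by-value builder pattern would transfer):

```rust
use std::ffi::OsStr;
use std::process::Command;

// A builder-style `with_args` that consumes and returns the command,
// so the caller never needs a `mut` binding.
trait WithArgs: Sized {
    fn with_args<I, S>(self, args: I) -> Self
    where
        I: IntoIterator<Item = S>,
        S: AsRef<OsStr>;
}

impl WithArgs for Command {
    fn with_args<I, S>(mut self, args: I) -> Self
    where
        I: IntoIterator<Item = S>,
        S: AsRef<OsStr>,
    {
        self.args(args);
        self
    }
}
```

Implementing it for openssh::Command would be the same one-liner, just with that crate's arg types.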
On a MacBook Air 2020 with macOS 12.3.1, I got the following error when running check.sh and run_ci_tests.sh:
failures:
---- no_route stdout ----
Custom { kind: ConnectionAborted, error: "connect to host 255.255.255.255 port 22: Address family not supported by protocol family" }
thread 'no_route' panicked at 'assertion failed: `(left == right)`
left: `ConnectionAborted`,
right: `Other`', tests/openssh.rs:676:17
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
failures:
no_route
The test in question is no_route. The error is created using Error::interpret_ssh_error.
There seems to be a codecov uploading error in the CI:
[2023-03-20T13:13:27.169Z] ['info'] Pinging Codecov: https://codecov.io/upload/v4?package=github-action-3.1.1-uploader-0.3.5&token=*******&branch=master&build=4468562670&build_url=https%3A%2F%2Fgithub.com%2Fopenssh-rust%2Fopenssh%2Factions%2Fruns%2F4468562670&commit=145085b848dffef4160bce864ae4aafbfb1a2151&job=coverage&pr=&service=github-actions&slug=openssh-rust%2Fopenssh&name=&tag=&flags=&parent=
[2023-03-20T13:13:27.574Z] ['error'] There was an error running the uploader: Error uploading to [https://codecov.io:](https://codecov.io/) Error: There was an error fetching the storage URL during POST: 404 - {'detail': ErrorDetail(string='Unable to locate build via Github Actions API. Please upload with the Codecov repository upload token to resolve issue.', code='not_found')}
Error: Codecov: Failed to properly upload: The process '/home/runner/work/_actions/codecov/codecov-action/v3/dist/codecov' failed with exit code 255
@jonhoo Could it be that one of the tokens you set up has expired?
I am a bit ignorant when it comes to ssh stuff, but I have been looking into it a lot lately. This tool is awesome, but unfortunately I have no control over how the authentication is set up in the environment where I am trying to use this. Are there any plans to support password-based authentication in this crate? Is there a reason that it isn't supported? Or is it just something that you aren't interested in doing?
Thanks!
Is it possible to set up port forwarding along with this crate? I see in your source code that you're just calling the ssh command, which is perfect, because then we can just add a -R 80:localhost:80 or a -L 80:localhost:80 to the ssh command based on which ports need to be forwarded.
Would you be open to implementing that? Additionally, just out of curiosity, I see in other issues that you've not found time to work on this recently. As of today, does the library still work or are there any pitfalls in the library that prevent it from working for a production use?
In crate documentation of openssh 0.9.0-rc.1:
And finally, our commands never default to inheriting stdin/stdout/stderr, since we expect you are using this to automate things. Instead, unless otherwise noted, all I/O ports default to Stdio::null.
This is inconsistent, since stdin/stdout/stderr in Command::spawn and Command::status now default to inherit. The link to Stdio::null is also wrong.
Using version 0.8.1, creating a connection does not seem to work on Windows 11.
When executing the following line of code:
let session = Session::connect(format!("root@{}", SERVER_IP.to_string()), KnownHosts::Accept).await?;
I either get the following output (with no folder created in the root of my project) and the process terminates:
Error: Connect(Custom { kind: ConnectionAborted, error: "getsockname failed: Bad file descriptor\r\nssh_dispatch_run_fatal: Connection to UNKNOWN port -1: unexpected internal error" })
Or I do not get any output and the process "hangs" at this line. The library then creates a folder in the root of my project named ".ssh-connection***" before waiting indefinitely.
Is it possible to have a Session within another Session?
26 of the 27 tests pass, excluding connect_timeout. Instead of resulting in a timeout, ssh seems to fail with:
"connect to host 192.0.0.8 port 22: Network is unreachable"
When changing the test's host IP, e.g. to the Google DNS 8.8.8.8, it works.
Hi,
I'm using the native-mux implementation for my project.
openssh = { version = "0.9.9", features = ["native-mux"], default-features = false }
...
[profile.release]
panic = "abort"
let session = SSHSession::connect_mux(&host.hostname, KnownHosts::Add)
.await
.unwrap_or_else(|_| panic!("{} Failed to connect to host.", host));
Sometimes I mistype the hostname and the connection rightfully fails, but perhaps because I have panic = "abort", the .ssh-connectionXXXXX directory (located under the current directory) is not removed. I'm on macOS. I have panic = "abort" because I create the SSH session inside an async task, and I want the program to terminate when a connection fails.
It seems like I can set the control_directory to point somewhere like /tmp where it doesn't really matter, but since /tmp is shared, it would be great if there were a native way to seamlessly clean up.
Thanks!
Hey,
Every time I start an ssh connection, it creates a .ssh-connectionXXXXX folder (where the Xs are replaced by random chars). How could I disable that feature, or make sure that the folder is removed afterwards?
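Until the crate cleans these up itself, one stopgap is a small shell helper that removes leftover control directories (assumption: no sessions are still using them; the directory name pattern comes from the reports above):

```shell
# Remove leftover .ssh-connection* control directories from a directory
# (defaults to the current one). Only run this when no sessions are live.
cleanup_stale_control_dirs() {
  dir="${1:-.}"
  find "$dir" -maxdepth 1 -type d -name '.ssh-connection*' -exec rm -rf {} +
}
```

In a longer-running setup you might additionally filter with `-mmin` so only old directories are touched.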
The Session::request_port_forwarding documentation states the following:
Currently, there is no way of stopping a port forwarding due to the fact that openssh multiplex server/master does not support this.
In ssh, you can cancel port forwarding for multiplexed connections like this:
ssh -S [path_to_multiplexing_socket] -O cancel -L/R [forwarding_addresses]
So I don't see why this couldn't be implemented?
Maybe I'm missing something. Will implement it myself if no one picks up the ticket or tells me why it won't work. :)
It would be useful to have a blocking wrapper around this library, for use in blocking code. It could be built in a similar fashion to that of reqwest. It could be a feature, or an external crate.
Currently, I have implemented a Session object myself. Could these two methods be opened up for external callers? That is, make new_process_mux or new_native_mux public.
impl Session {
    #[cfg(feature = "process-mux")]
    pub(super) fn new_process_mux(tempdir: TempDir) -> Self {
        Self(SessionImp::ProcessImpl(process_impl::Session::new(tempdir)))
    }

    #[cfg(feature = "native-mux")]
    pub(super) fn new_native_mux(tempdir: TempDir) -> Self {
        Self(SessionImp::NativeMuxImpl(native_mux_impl::Session::new(
            tempdir,
        )))
    }
}
Prior to version 0.9.0, openssh had a very nice feature: it matched tokio::process::{Command, Child, ChildStdin, ChildStdout} very closely. So it was very easy to work with both local and remote commands by spawning them in a function and returning the respective stdin/stdout for the code to interact with.
Since 0.9.0, this no longer "just works". openssh::Command::spawn is now async; tokio::process::Command::spawn isn't. ChildStdin/ChildStdout are now opaque types, so they cannot be returned in the same places as their equivalents in tokio::process.
What are the reasons for these changes? And are there any straightforward workarounds? (I'm a beginner in Rust, so I may be missing something very basic.)
Since 0.9.0-rc1 has been out for one month and there isn't any bug report for it, I think it is reasonable to prepare for a 0.9.0 release.
Tasks:
- (Scp). #50
- Make ChildStdin, ChildStdout and ChildStderr opaque wrappers instead of concrete types, and only expose the traits Unpin (since tokio::process::Child* is Unpin), AsyncWrite and AsyncRead. (It would be much easier if Rust had stabilised impl type aliases.) #53
- Make Session::control_socket available only on unix, or make it return Option<&Path>. It exposes implementation details, and ssh on Windows does not support session multiplexing, so this API would not be present on Windows. #52
Currently, testing locally, the historical control_directory directories will always persist during each debugging session.
E.g:
➜ ll -als
total 0
0 drwxr-xr-x 6 root staff 192B 7 28 10:03 .
0 drwxr-xr-x 3 root staff 96B 7 27 18:25 ..
0 drwxr-xr-x 4 root staff 128B 7 28 09:55 .ssh-connectionpCn9J7
0 drwxr-xr-x 2 root staff 64B 7 28 10:01 .ssh-connectionpHq9a2
0 drwxr-xr-x 2 root staff 64B 7 28 10:03 .ssh-connectionpJp983
0 drwxr-xr-x 2 root staff 64B 7 28 10:05 .ssh-connectionpLwp24
Using Windows 11, RC version 0.9.0-rc.1 fails to compile, as tokio-pipe seems to fail to compile correctly on Windows.
See: yskszk63/tokio-pipe#23
Instead, it is using shell commands.
This makes the tool incompatible with SFTP-only servers.
It's unclear when looking at the documentation. It probably shouldn't be called SFTP.
russh is a Rust implementation of ssh.
Using it would be much better than spawning the external ssh binary: it removes the dependency on having ssh in the environment, and ssh also contains a lot of questionable coding practices (global state for arguments and program state) and is written in C, so having a Rust implementation is preferable.
I think we could support this by having a feature and a new set of Session(Builder)::new_russh constructors for this.
To keep backwards compatibility, I think we should make openssh::Error opaque and instead have an opaque type with Error::kind() for inspecting the error kind.
It's probably not always a good idea to leave a temporary file in the working directory. Is there a reason why this is not in the tmp folder by default (std::env::temp_dir for example)?
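As a workaround sketch for callers today: place the control directory under the system temp dir yourself and hand it to the builder (assumption: the resulting path is then passed to SessionBuilder::control_directory; the helper name is illustrative):

```rust
use std::env;
use std::fs;
use std::path::PathBuf;

// Create a per-process control directory under the OS temp dir, so
// nothing is left in the current working directory.
fn make_control_dir() -> std::io::Result<PathBuf> {
    let dir = env::temp_dir().join(format!("ssh-control-{}", std::process::id()));
    fs::create_dir_all(&dir)?;
    Ok(dir)
}
```

Using the process id keeps concurrent programs from colliding, though a leftover directory from a crashed run would still need separate cleanup.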
Right now it seems that openssh::Session::connect_mux does not pass the SSH_AUTH_SOCK environment variable to the child SSH process, breaking all setups that depend on an SSH agent.
This is exhibited by getting a PermissionDenied error when trying to establish a connection: Connect(Custom { kind: PermissionDenied, error: "pin required\r\[email protected]: Permission denied (publickey)." }), while the same host does not prompt for anything when using ssh standalone.
I'm executing remote commands through a Session using Session::command(), but I don't find a way to set the current directory for a given command. Even the cd command does not keep the directory changed for subsequent commands.
I am looking for something like the std::process::Command::current_dir() method.
Is there any way to do this?
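One common workaround is to prefix the remote command with a cd, since each command runs in a fresh shell. A minimal sketch of such a helper (the function name is hypothetical, and the quoting here is a naive single-quote escape for POSIX shells, not a full shell-escaping solution):

```rust
// Build a remote command string that runs `cmd` inside `dir`.
// Assumption: the remote login shell is POSIX-compatible.
fn with_current_dir(dir: &str, cmd: &str) -> String {
    // Single-quote the directory, escaping embedded single quotes
    // with the standard '\'' trick.
    let quoted = format!("'{}'", dir.replace('\'', r"'\''"));
    format!("cd {} && {}", quoted, cmd)
}
```

The resulting string could then be handed to something like session.shell(...) or session.command("sh").arg("-c").arg(...), depending on which API you use.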
Hi! Thanks as always for the great crates (and videos).
I'm writing a multi-node command runner built around this crate (https://github.com/jaywonchung/pegasus), and I have the need to stream both the stdout and stderr of commands to the user.
My take on streaming the stdout is like this:
https://github.com/jaywonchung/pegasus/blob/a92f27a48d7a768f5d6d9cc4e304a7c917010749/src/session.rs#L49-L77
(Basically I'm using AsyncBufReadExt::fill_buf to fetch characters.)
Now, building on top of this, I'd like to stream stdout together with stderr. Since stdout and stderr are in theory separate, I thought it should be safe to create a replica async fn stream_stderr(&self, stderr: &mut ChildStderr) and join them like:
futures::future::join(
stream_stdout(process.stdout().as_mut().unwrap()),
stream_stderr(process.stderr().as_mut().unwrap()),
).await
but obviously this erred, with the borrow checker complaining about process being mutably borrowed twice.
Approaches that failed:
- process.channel.take(): failed because the field channel is private.
- let openssh::RemoteChild { session, channel } = process: failed because tokio::process::Child doesn't (cannot?) implement Copy.
Would there be a way to get around this?
Thanks.
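The usual way around the double mutable borrow is to move each pipe out of the child with Option::take() before starting the readers. A synchronous, std-only sketch of the idea (the async/openssh version would be analogous, assuming RemoteChild's stdout()/stderr() accessors also hand back takeable options):

```rust
use std::io::Read;
use std::process::Child;
use std::thread;

// Read stdout and stderr concurrently without borrowing `child` twice:
// take() moves each pipe handle out, so the readers own them.
fn stream_both(mut child: Child) -> (Vec<u8>, Vec<u8>) {
    let mut out_pipe = child.stdout.take().expect("stdout must be piped");
    let mut err_pipe = child.stderr.take().expect("stderr must be piped");
    let reader = thread::spawn(move || {
        let mut buf = Vec::new();
        out_pipe.read_to_end(&mut buf).unwrap();
        buf
    });
    let mut err = Vec::new();
    err_pipe.read_to_end(&mut err).unwrap();
    let out = reader.join().unwrap();
    child.wait().unwrap();
    (out, err)
}
```

In async code, the same ownership transfer lets futures::future::join borrow nothing from the child itself.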
I have a code that calls Session::connect(...).await
. It runs fine most of the time, but fails with "failed to connect to the remote host", when run from inside a certain directory (or its children). (I.e., the only thing that changes is the current working directory.) I don't see anything special about that directory, but perhaps I'm missing something. Any pointers on what this could be, or how to go about debugging it?
Is there an example of connecting over SFTP with a username and password?
I have a central server that is authenticated with multiple machines and I need to use it as an intermediate jump host. Is this option exposed by any chance?
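For context, plain OpenSSH expresses this with ProxyJump, which this crate should inherit through ~/.ssh/config since it drives the real ssh binary (hostnames below are placeholders):

```
# ~/.ssh/config on the local machine; "central-server" is a placeholder
Host target-*
    ProxyJump user@central-server
```

With such a config entry, connecting a Session to any target-* host would transparently hop through the jump host, without the crate needing a dedicated API (assumption: OpenSSH 7.3+ for ProxyJump support).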
Hey there! I've been looking for a project like this, it looks awesome!
I was wondering whether the Drop implementation for Session should call Session::close or Session::terminate. I noticed that the tsp example has a manual session.close() in it; would it be more "rusty" to take advantage of RAII instead?
I was trying to do a curl and noticed that the args would fail. It turns out shell-escape is adding a few extra single/double quotes.
Try something like this and review the data structure. I think you need to use a different crate, as shell-escape hasn't been updated in years.
let cmd = dbg!(session.command("curl")
    .arg("-H 'Auth'")
    .arg("some_host_ip")
    .arg("-o 'stuff.txt'"))
    .output()
    .await
    .unwrap();
I have code that's roughly like follows:
// …
let std::process::Output {
status,
stderr,
stdout,
} = c.spawn().await?.wait_with_output().await?;
// …
let content = &stdout;
tracing::warn!(
"daily content is {} long and ends with {:?}",
content.len(),
content.iter().rev().next()
);
// …
The command being run on the remote produces a relatively large amount of output on a single line. When I'm using the native-mux feature, this output is incomplete!
With native-mux I see:
2022-10-25T05:43:55.091989Z WARN start_sim::live: daily content is 147456 long and ends with Some(110)
but if I change to process-mux and change connect_mux to connect, I see:
2022-10-25T05:42:40.638584Z WARN start_sim::live: daily content is 410777 long and ends with Some(10)
The process-mux output is correct.
I don't have a minimum reproducible example at hand, but I suspect one can just create a large file (say 1 MiB) on a remote, spawn cat large-file via openssh, and reproduce the issue.
I tried with 0.9.7, same result.
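A local stand-in for the suggested repro, assuming a POSIX shell (this only prepares the large file and records its size; the openssh-driven cat over the connection is omitted):

```shell
# Create a ~1 MiB file like the suspected trigger, then record its
# size so it can be compared against what wait_with_output() reports.
head -c 1048576 /dev/urandom > /tmp/large-file
bytes=$(wc -c < /tmp/large-file)
echo "$bytes"
```

If the native-mux read really stops early, the byte count reported by the crate would be smaller than this on-disk size.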
Hi! I'm using this crate to forward a remote socket like this:
session
.request_port_forward(
ForwardType::Local,
SocketAddr::from_str("127.0.0.1:0").unwrap(),
Path::new("/var/run/docker.sock"),
)
.await
.unwrap();
I'd like to forward to 127.0.0.1:0 to let the OS pick any free port. Is it then possible to get the port number? Thanks!
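One workaround is to ask the OS for a free port yourself before requesting the forward, so the concrete address is known up front (note the small race window between dropping the listener and ssh binding the port; whether that is acceptable depends on your use case):

```rust
use std::net::{SocketAddr, TcpListener};

// Bind to port 0 so the OS assigns a free port, then report the
// concrete address. The port is only reserved while the listener lives.
fn pick_free_port() -> std::io::Result<SocketAddr> {
    let listener = TcpListener::bind("127.0.0.1:0")?;
    listener.local_addr()
}
```

The returned SocketAddr could then be passed to request_port_forward in place of the :0 address.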
For example, I need to reboot my servers before I start a new batch of experiments. Is it possible to issue a reboot from the shell and then disconnect the local ssh handler? Also, is it possible to run things in parallel? I have tried things like tokio::join but it doesn't seem to work nicely here. Thanks!
Hi,
I saw issue #68, and from the documentation it is not possible to use a username and password for the session, only key pairs. Would it be possible to add this authentication method, as the ssh2 crate did?
Thank you for your attention!
jmmb
From openssh-rust/openssh-sftp-client#65 (comment) :
Is it possible for Sftp to hold a session and spawn commands (which means holding on to RemoteChild) internally? That way users wouldn't need to hold session, remote_child and Sftp at the same time. To avoid RemoteChild being dropped earlier than Sftp, users like opendal have to use owning_ref tricks.
openssh_sftp_client::Sftp now has a new method new_with_auxiliary to hold any data, and @Xuanwo & @silver-ymz are currently using a workaround I proposed, based on a Boxed future.
But I think it's still better to have a RemoteChildOwned to avoid the self-reference issue; it's pretty bad for anybody using it with openssh_sftp_client.
Alternatively, we can wait for the new proxy-mode to openssh-mux-client openssh-rust/openssh-mux-client#6 , which will enable you to use stdin/stdout/stderr while dropping Session and RemoteChild.
It will also support windows.
Unfortunately, it was blocked on tokio_util::sync::WaitForCancellationFutureOwned, so I switched to other projects and kind of forgot about it.
I will restart working on it, but it will take quite some time before it's ready to use.
When restarting the sftp service, SSH authentication fails. The warning logs: WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!
The detail log:
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ED25519 key sent by the remote host is
SHA256:4/usiLYoGeiUTd5NMxpS3Snm7gR5Vkudk4lnGR/5CS8.
Please contact your system administrator.
Add correct host key in /Users/baoyachi/.ssh/known_hosts to get rid of this message.
Offending ED25519 key in /Users/baoyachi/.ssh/known_hosts:32
Host key for [0.0.0.0]:2222 has changed and you have requested strict checking.
Host key verification failed.
Connection closed
Connection closed.
So, how should the ~/.ssh/known_hosts file be managed when the sftp-server restarts, so that authentication does not fail?
It seems it depends on openssh-mux-client, which depends on tokio-io-utility ^0.6.4, which has been yanked: https://crates.io/crates/tokio-io-utility/versions
A simple setup like this:
[dependencies]
openssh = { version= "0.9", default_features = false, features = ["native-mux"] }
tokio = { version = "1", default_features = false, features = ["rt-multi-thread"] }
futures = { version = "0.3", default_features = false }
Shows the issue:
[nix-develop]$ rm Cargo.lock
[nix-develop]$ cargo check
Updating crates.io index
error: failed to select a version for the requirement `tokio-io-utility = "^0.6.4"`
candidate versions found which didn't match: 0.7.1
location searched: crates.io index
required by package `openssh-mux-client v0.15.0`
... which satisfies dependency `openssh-mux-client = "^0.15.0"` of package `openssh v0.9.0`
... which satisfies dependency `openssh = "^0.9"` of package `start-engine v0.1.0 (/home/shana/programming/start-engine)`
Hi! I have a long running command on a remote, and I would like to have the ability to terminate the remote process as well - in my case triggered by a timeout on the local side. As stated in the documentation, when the ChildProcess or the connection is dropped, only the local part of the connection finishes, which is consistent with the normal ssh behavior for command exec.
Now, I haven’t tried it yet with this library, but if we had the ability to enforce a PTY, closing the connection would close the remote execution as well. But, this is currently explicitly disabled through the -T flag.
I would be happy to work on a fix for this. My question is, given that this is explicitly disabled (although if I recall for ssh exec the -T is implied), what is the reason to enforce it this way? Would it be possible to have another command execution that would run it inside a PTY? Would it make more sense to just not do the -T and leave it to the regular ssh_config to make a decision on how to do it?
Thanks!
I tried following the example provided by std::process::Stdio::piped(); unfortunately it doesn't work, because of the following:
the trait bound `openssh::Stdio: From<tokio::process::ChildStdout>` is not satisfied E0277 the trait `From<tokio::process::ChildStdout>` is not implemented for `openssh::Stdio` Help: the following other types implement trait `From<T>`:
Short example snippet:
async fn ... {
....
let source = Command::new("util1")
.stdout(Stdio::piped())
.spawn()
.unwrap()
.stdout
.unwrap();
let destination = remote.command("util2")
.stdin(source) // Incorrect
.spawn()
.await
.unwrap();
// TODO: Alternative?
}
What would be the correct/best way to pipe local process' stdout into remote process' stdin with this crate? Sorry if this question is too basic, I can't wrestle the compiler that well yet :)
The contribution section needs an update; it needs to indicate how to run integration tests on this project.
Allow setting env variables during the command sending process
Error, as specified in the docs
Unfortunately, not possible as of now
Hi, I was wondering if keeping a nested (doubly) ssh session was possible using this crate.
I need to do the following thing:
connect to server 1
from server 1, connect to many other servers, that are reachable only from server 1
I cannot do SSH tunnel with my setup.
Thanks.