neonmoe / minreq
Simple, minimal-dependency HTTP client.
Home Page: https://crates.io/crates/minreq
License: ISC License
I ran into an issue when doing a HEAD request, where the response is never returned:
extern crate minreq;
fn main() {
let url = "https://httpbin.org/status/418";
let response = minreq::head(url)
.send()
.unwrap();
let headers = response.headers;
println!("{:#?}", headers);
}
When I changed the HTTP method to GET, it worked.
I suspect that the read_from_stream
function never quits the loop.
One way to solve this issue is to change the response body to be a stream: we only read the HTTP status and headers from the TCP stream, and pass the body as a stream to the caller.
That way, if the body is empty (e.g. in head requests) we never block indefinitely.
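The blocking read can also be avoided without streaming: RFC 7230 section 3.3.3 says responses to HEAD (and 1xx/204/304 responses) never carry a body, so the reader should not block waiting for one. A minimal sketch of that decision, not minreq's actual code (the function name is made up for illustration):

```rust
// Sketch: decide how many body bytes to expect before blocking on the
// stream, following RFC 7230 section 3.3.3. Hypothetical helper, not
// part of minreq's API.
fn expected_body_length(method: &str, status: u16, content_length: Option<usize>) -> usize {
    // HEAD responses and 1xx/204/304 responses never have a body,
    // even if a Content-Length header is present.
    if method.eq_ignore_ascii_case("HEAD") || matches!(status, 100..=199 | 204 | 304) {
        return 0;
    }
    content_length.unwrap_or(0)
}

fn main() {
    // A teapot response to HEAD may advertise a length, but no body follows.
    assert_eq!(expected_body_length("HEAD", 418, Some(135)), 0);
    assert_eq!(expected_body_length("GET", 200, Some(135)), 135);
}
```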
let url = "https://github.githubassets.com/images/modules/site/logos/ibm-logo.png";
if let Ok(response) = minreq::get(url).send() { }
thread '' panicked at 'byte index 1703 is not a char boundary; it is inside 'ç' (bytes 1702..1704) of `HTTP/1.1 200 OK
12: <alloc::string::String as core::ops::index::Index<core::ops::range::RangeFrom>>::index
at \src\liballoc/string.rs:1971
13: minreq::connection::read_from_stream
at \minreq-1.2.1\src/connection.rs:191
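The panic comes from indexing into a String at a byte offset that falls inside a multi-byte character. Splitting the header block from the body on raw bytes avoids the char-boundary check entirely; a minimal sketch (not minreq's actual implementation):

```rust
// Sketch: locate the end of the header block on raw bytes instead of
// indexing into a String, which panics on non-UTF-8 char boundaries.
fn split_headers(buf: &[u8]) -> Option<(&[u8], &[u8])> {
    buf.windows(4)
        .position(|w| w == b"\r\n\r\n")
        .map(|i| (&buf[..i], &buf[i + 4..]))
}

fn main() {
    // A body containing the multi-byte character 'ç' is no longer a problem:
    let raw = "HTTP/1.1 200 OK\r\nX: y\r\n\r\nfaçade".as_bytes();
    let (head, body) = split_headers(raw).unwrap();
    assert!(head.starts_with(b"HTTP/1.1 200 OK"));
    assert_eq!(body, "façade".as_bytes());
}
```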
As (partly) fixed in #33, the redirection code doesn't quite do what the RFC describes. I'm opening this issue so I remember to fix it later.
On some websites, e.g. http://mockup.love, minreq fails with the following error:
infinite redirection loop detected
Firefox and curl work fine.
80,000+ websites out of the top million from Feb 3 Tranco list are affected.
Tested using this code. Test tool output from all affected websites: minreq-infinite-redirection.tar.gz
I admit, I am a bit surprised to see a check for this particular condition as opposed to simply setting a limit on the number of redirections.
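A bounded redirect count, as suggested above, can be sketched like this (the hop-resolving closure is a stand-in for whatever issues the request and inspects the Location header; the limit of 10 is an arbitrary choice for illustration):

```rust
// Sketch: cap the number of redirections instead of trying to detect loops.
fn follow(
    mut url: String,
    next_location: impl Fn(&str) -> Option<String>,
) -> Result<String, &'static str> {
    const MAX_REDIRECTS: usize = 10; // arbitrary; curl defaults to 50
    for _ in 0..MAX_REDIRECTS {
        match next_location(&url) {
            Some(next) => url = next, // server redirected us again
            None => return Ok(url),   // final destination reached
        }
    }
    Err("too many redirects")
}

fn main() {
    let hops = |u: &str| match u {
        "a" => Some("b".to_string()),
        "b" => Some("c".to_string()),
        _ => None,
    };
    assert_eq!(follow("a".into(), hops), Ok("c".to_string()));
    // A self-loop terminates with an error instead of spinning forever.
    assert!(follow("x".into(), |_| Some("x".to_string())).is_err());
}
```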
I was using minreq and hit a strange bug that made my program hang. After some debugging I found the origin of the bug.
response.rs, line 35
for byte in &mut parent {
let (byte, length) = byte?;
body.reserve(length);
body.push(byte);
}
Sometimes that loop never ends. I will try to fix that.
On some websites, e.g. http://ticketsnow.com, minreq fails with the following error:
non-usize chunk length with transfer-encoding: chunked
Firefox and curl work fine.
175 websites out of the top million from Feb 3 Tranco list are affected.
Tested using this code. Test tool output from all affected websites: minreq-cannot-chuck-the-chunk.tar.gz
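One likely culprit: per RFC 7230 section 4.1, the chunk-size line is hexadecimal and may carry a chunk extension after a semicolon, so parsing the whole line as a decimal usize fails. A minimal parsing sketch (not minreq's actual code):

```rust
// Sketch: parse a chunked transfer-encoding size line per RFC 7230 sec. 4.1.
fn parse_chunk_size(line: &str) -> Option<u64> {
    // Strip any chunk extension (";name=value") and surrounding whitespace,
    // then parse the remaining token as hexadecimal.
    let size = line.split(';').next()?.trim();
    u64::from_str_radix(size, 16).ok()
}

fn main() {
    assert_eq!(parse_chunk_size("1a"), Some(26));
    assert_eq!(parse_chunk_size("1A ; ext=val"), Some(26)); // extension ignored
    assert_eq!(parse_chunk_size("0"), Some(0));             // terminating chunk
    assert_eq!(parse_chunk_size("zz"), None);               // not hex
}
```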
On some websites, e.g. http://naturalresourceswales.gov.uk, minreq fails with the following error:
received corrupt message
Firefox and curl work fine.
90,000+ websites out of the top million from Feb 3 Tranco list are affected.
Tested using this code. Test tool output from all affected websites: minreq-corrupt-message.tar.gz
minreq appends \r\n to the request body here. I couldn't find a good reason for that.
This is currently broken behaviour. Some servers and frameworks do not return a status reason phrase, so a 200 without the "OK" is reported as a 503, which is wrong.
See spring-projects/spring-boot#6548
Furthermore, RFC 7230 specifies that this reason phrase should be ignored:
https://datatracker.ietf.org/doc/html/rfc7230#section-3.1.2
A client SHOULD ignore the reason-phrase content.
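A status-line parser that follows the RFC only needs the version and the code; everything after the second space, including a missing reason phrase, is irrelevant. A minimal sketch (not minreq's actual parser):

```rust
// Sketch: parse "HTTP/1.1 200 OK" taking only the version and status code.
// Per RFC 7230 section 3.1.2, the reason phrase SHOULD be ignored, and it
// may be absent entirely ("HTTP/1.1 200").
fn parse_status_line(line: &str) -> Option<(String, u16)> {
    let mut parts = line.splitn(3, ' ');
    let version = parts.next()?.to_string();
    let code = parts.next()?.parse().ok()?;
    Some((version, code))
}

fn main() {
    // With and without a reason phrase, the status code is the same.
    assert_eq!(parse_status_line("HTTP/1.1 200 OK"), Some(("HTTP/1.1".into(), 200)));
    assert_eq!(parse_status_line("HTTP/1.1 200"), Some(("HTTP/1.1".into(), 200)));
    assert_eq!(parse_status_line("garbage"), None);
}
```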
Noticed this while writing tests for URL encoding: if you don't specify http(s):// at the start of a URL, minreq returns garbage data. It would probably be a good idea to account for this, either by treating such URLs as implicit http or by returning an error.
minreq::Error::IoError (and possibly others) has both a Display impl that prints the contained error's message and a source() impl that returns that same error, so this happens:
use anyhow::{Context, Result};
fn main() -> Result<()> {
minreq::get("http://127.0.0.1:1234").send().context("[context]")?;
Ok(())
}
Error: [context]
Caused by:
0: Connection refused (os error 111)
1: Connection refused (os error 111)
Not quite sure what should be done here instead.
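The usual convention is that Display describes only the current layer, while the underlying cause is exposed exactly once through source(), so report formatters like anyhow do not print it twice. A sketch with a hypothetical wrapper type (HttpError is made up for illustration, not minreq's type):

```rust
use std::{error::Error, fmt, io};

// Hypothetical wrapper illustrating the convention: Display describes this
// layer only; the cause is reachable solely via source(), so anyhow's
// "Caused by:" chain prints it exactly once.
#[derive(Debug)]
struct HttpError(io::Error);

impl fmt::Display for HttpError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "I/O error during request") // do NOT append self.0 here
    }
}

impl Error for HttpError {
    fn source(&self) -> Option<&(dyn Error + 'static)> {
        Some(&self.0)
    }
}

fn main() {
    let e = HttpError(io::Error::new(io::ErrorKind::ConnectionRefused, "refused"));
    // Display does not repeat the inner message...
    assert_eq!(e.to_string(), "I/O error during request");
    // ...but the cause is still available to error-reporting crates.
    assert!(e.source().is_some());
}
```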
minreq sometimes fails to uphold the request timeout and exceeds it by at least 50%.
In my test this happened on 28033 websites out of the top million from Feb 3 Tranco list.
Tested using this code, and an external timeout on the entire process set to 60 seconds.
Test tool output from all affected websites: minreq-timeouts.tar.gz
A similar issue has been observed with other clients (e.g. ureq and attohttpc) due to them resetting the timeout on every redirection, but there it was happening on 15 websites or so, not 28,000. So I suspect something deeper is at play.
minreq will attempt to allocate a very large amount of memory and OOM if given a content-length header with a very large value. Tested on minreq 2.4.2.
Steps to reproduce:
content_length_megalomania.sh
The exact code and Cargo.lock used in my tests can be found here.
cc @jhwgh1968 - your script found a bug!
In the current implementation, once a malformed response status line is detected, the status code is set to 503. Do we need to proceed further and read the headers and body?
Enabling minreq's https-rustls feature produces a binary 600K larger than enabling rustls in reqwest does, which is not exactly mini.
Would it look something like this?
minreq::post(format!("http://127.0.0.1:8000{}", page))
.with_body(b"x=1".to_vec())
.send()
.unwrap();
When I use this, somewhere along the line "x=1" gets transformed to x=1. It might be on the receiving side though. I just don't know (yet).
A request may take much longer than the configured timeout, or even hang, because the TCP connect phase has no timeout.
Line 236 in cdfd2a4
minreq::Request::new(Method::Get, "http://54.158.248.248:91").with_timeout(1000).send();
The code above will hang for a long time even though we have configured a timeout of 1 second.
@neonmoe Happy to work on resolving this once you confirm that this is actually an issue.
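For reference, std already offers a bounded connect: TcpStream::connect_timeout limits the connect phase, and the read/write timeouts can be set on the resulting stream. A sketch of how a client could wire this together (not minreq's actual code):

```rust
use std::net::{SocketAddr, TcpStream};
use std::time::Duration;

// Sketch: TcpStream::connect has no timeout, but connect_timeout bounds
// the TCP connect phase; read/write timeouts then cover the rest.
fn connect_with_timeout(addr: SocketAddr, timeout: Duration) -> std::io::Result<TcpStream> {
    let stream = TcpStream::connect_timeout(&addr, timeout)?;
    stream.set_read_timeout(Some(timeout))?;
    stream.set_write_timeout(Some(timeout))?;
    Ok(stream)
}

fn main() {
    // Connect to a local listener; this succeeds well within the timeout.
    let listener = std::net::TcpListener::bind("127.0.0.1:0").unwrap();
    let addr = listener.local_addr().unwrap();
    assert!(connect_with_timeout(addr, Duration::from_secs(1)).is_ok());
}
```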
minreq panics on fetching some websites, e.g. virtualflorist.com
Backtrace:
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: InvalidDNSNameError', src/libcore/result.rs:1189:5
stack backtrace:
0: backtrace::backtrace::libunwind::trace
at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.40/src/backtrace/libunwind.rs:88
1: backtrace::backtrace::trace_unsynchronized
at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.40/src/backtrace/mod.rs:66
2: std::sys_common::backtrace::_print_fmt
at src/libstd/sys_common/backtrace.rs:77
3: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt
at src/libstd/sys_common/backtrace.rs:59
4: core::fmt::write
at src/libcore/fmt/mod.rs:1057
5: std::io::Write::write_fmt
at src/libstd/io/mod.rs:1426
6: std::sys_common::backtrace::_print
at src/libstd/sys_common/backtrace.rs:62
7: std::sys_common::backtrace::print
at src/libstd/sys_common/backtrace.rs:49
8: std::panicking::default_hook::{{closure}}
at src/libstd/panicking.rs:195
9: std::panicking::default_hook
at src/libstd/panicking.rs:215
10: std::panicking::rust_panic_with_hook
at src/libstd/panicking.rs:472
11: rust_begin_unwind
at src/libstd/panicking.rs:376
12: core::panicking::panic_fmt
at src/libcore/panicking.rs:84
13: core::result::unwrap_failed
at src/libcore/result.rs:1189
14: minreq::connection::Connection::send_https
15: minreq::request::Request::send_lazy
16: minreq::connection::handle_redirects
17: minreq::connection::Connection::send
18: minreq::request::Request::send_lazy
19: minreq::connection::handle_redirects
20: minreq::connection::Connection::send
21: minreq::request::Request::send
22: minreq_test::main
23: std::rt::lang_start::{{closure}}
24: std::rt::lang_start_internal::{{closure}}
at src/libstd/rt.rs:52
25: std::panicking::try::do_call
at src/libstd/panicking.rs:296
26: __rust_maybe_catch_panic
at src/libpanic_unwind/lib.rs:79
27: std::panicking::try
at src/libstd/panicking.rs:272
28: std::panic::catch_unwind
at src/libstd/panic.rs:394
29: std::rt::lang_start_internal
at src/libstd/rt.rs:51
30: std::rt::lang_start
31: __libc_start_main
32: _start
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
List of 225 websites where this happens: minreq_dns_panics.txt
When following the example for using JSON replies with serde, the Rust compiler fails with the error:
error[E0599]: no method named `json` found for struct `minreq::Response` in the current scope
--> examples/main.rs:66:36
|
66 | let s = nominatim_overview.json::<T>().expect("could not parse as string");
| ^^^^ method not found in `minreq::Response`
My code
let osm_id = 55765;
let constructed_url = format!("http://server-backup.loc:8081/details.php?osmtype=R&osmid={}&class=boundary&polygon_geojson=1&format=json", osm_id);
let nominatim_overview = minreq::get(constructed_url).send().expect("could not send request");
let s = nominatim_overview.json::<OsmDetails>().expect("could not parse reply");
The request works fine with as_str()
Currently the crate will ignore "https://" and send the request as raw text to the port 80 anyway. HTTPS will be implemented with the rustls crate, and it'll be a feature to avoid polluting the dependencies.
Minreq is so simple and easy, and I like the direction it's going. However, I think it has some issues with some APIs. For example, when trying to connect to the vultr.com API at https://api.vultr.com/ to get a list of all operating systems via the https://api.vultr.com/v1/os/list endpoint (bearing in mind that this endpoint does not need an API key), I get a 503 "Server did not provide a status line" response with no headers or body.
Also, if there is a successful response from this server even with API keys accepted where required, the responses are correct with headers but the json body is empty.
With this API, all other API tools like cURL, Rested and Postman give a successful response with the correct JSON body. I don't understand the problem. Could the issue be that the Vultr API is upgrading from HTTP/1 to HTTP/2 and that minreq does not support it?
Here's the log output:
https://gist.github.com/spearman/28a10c35845a49deb0145929c959ab9d
Trying another url (https://canihazip.com/s) gives 'BadRecordMac' error instead of UnexpectedMessage.
Hi everyone. I hope all of you are well.
I was trying this package for the first time, and tried to run this code:
fn main() {
let response = minreq::get("http://httpbin.org/ip").send();
println!("{:?}",response.status_code);
}
And every time I run it I get this error: "no field status_code on type Result<Response, minreq::Error>".
The version of rustc I am using is 1.64.0.
Great package by the way.
Have a nice day.
How do I send and read the HTTP protocol version? Normally it should be HTTP/1.1, but some servers only support HTTP/1.0 or HTTP/0.9.
let mut file = fs::File::open(&dir_temp.join(constants::TEMP_NAME)).unwrap();
let mut buffer = Vec::new();
file.read_to_end(&mut buffer).unwrap();
let response = minreq::post(constants::URL)
.with_header("Content-Type", "multipart/form-data")
.with_header("Content-Length", &buffer.len().to_string())
.with_body(buffer)
.send()
.unwrap();
I tried this but it doesn't work :(
Line 286 in cdfd2a4 contains println!() statements that appear to be debug logging. This probably shouldn't be enabled by default in releases published on crates.io.
I am using minreq in a crawler, so my program is making a lot of requests.
Sometimes minreq hangs when loading a website.
100% of a processor core is used until I kill the program.
I've been working on the dev-2.0 branch for a while now, and I've published an initial (or possibly final, if nothing comes up) version of the crate's 2.0 release on crates.io: 2.0.0-alpha.1. Crates.io, docs.rs.
Here's a list of the planned/completed features of 2.0:
- Reworked Response's API to be more comfortable, as it was deemed insufficient back in issue #13.
- The response can be consumed as an Iterator during the download, to avoid long periods of blocking for larger downloads. Implemented in the form of ResponseLazy, which is returned from send_lazy().
- A minreq::Error type, to unify the returned Result types from various functions (which makes using ? easier) and to help debugging (by providing more precise information).
In addition to these, there has been a lot of cleaning up in the internals.
If you have any comments about the new API, code changes, changelog discrepancies, wishes for more 2.0 changes, or anything else, I'd be glad to hear them. I'll push 2.0 out when there's nothing more to add or remove, i.e. nothing has been committed or discussed for a month or so.
Add support for async rust
I don't see this in the docs but are connections persisted? Or are they dropped on response?
It seems we connect again for each request:
Line 244 in 1533698
If connections are not persisted, a connection pool or connection cache would greatly speed up subsequent requests (especially for HTTPS).
with_timeout() is currently advertised as setting a timeout for the completion of the entire request. However, that's not what it actually does: internally, the implementation translates it into read/write timeouts, which only apply to a single read. This means that an established connection will keep going indefinitely as long as the remote host keeps replying.
This allows denial-of-service attacks; see here for more details.
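One common fix is to fix an overall deadline when the request starts and derive each per-read timeout from the time remaining, so a slowly trickling peer cannot extend the total time. A minimal sketch (hypothetical helper, not minreq's implementation):

```rust
use std::time::{Duration, Instant};

// Sketch: compute the time left until a fixed deadline. The caller would
// pass the result to set_read_timeout() before every socket read, so the
// whole request is bounded, not just each individual read.
fn remaining(deadline: Instant) -> Result<Duration, &'static str> {
    deadline
        .checked_duration_since(Instant::now())
        .filter(|d| !d.is_zero())
        .ok_or("request timed out")
}

fn main() {
    let soon = Instant::now() + Duration::from_secs(5);
    assert!(remaining(soon).is_ok());

    let past = Instant::now();
    std::thread::sleep(Duration::from_millis(10));
    assert!(remaining(past).is_err()); // deadline already passed
}
```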
I am using minreq in my repo here with no other dependencies, and my file size is 1.5 MB. What am I doing wrong?
While considering a breaking change release (#13 (comment)), it would be nice to provide more ways to handle the response.
For now, minreq reads all bytes into memory after receiving the response, which is convenient for responses that only contain a few lines of text. When requesting a big chunk of data (for example, downloading a file), there may be some trouble.
How about the Response struct holding the stream as a field? Since the HTTP status line and headers appear before the body, we can still provide them in Response as pub fields, and then use dedicated methods to actually decode the body only when needed.
struct Response<S> {
// status fields ...
stream: S,
}
impl<S> Response<S> {
fn into_string(self) -> Result<String> {}
fn into_vec(self) -> Result<Vec<u8>> {}
fn into_stream(self) -> S {}
}
minreq panics when downloading some websites, e.g. iheartradio.com
Backtrace:
thread 'main' panicked at 'assertion failed: self.is_char_boundary(at)', <::core::macros::panic macros>:3:10
stack backtrace:
0: backtrace::backtrace::libunwind::trace
at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.40/src/backtrace/libunwind.rs:88
1: backtrace::backtrace::trace_unsynchronized
at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.40/src/backtrace/mod.rs:66
2: std::sys_common::backtrace::_print_fmt
at src/libstd/sys_common/backtrace.rs:77
3: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt
at src/libstd/sys_common/backtrace.rs:59
4: core::fmt::write
at src/libcore/fmt/mod.rs:1057
5: std::io::Write::write_fmt
at src/libstd/io/mod.rs:1426
6: std::sys_common::backtrace::_print
at src/libstd/sys_common/backtrace.rs:62
7: std::sys_common::backtrace::print
at src/libstd/sys_common/backtrace.rs:49
8: std::panicking::default_hook::{{closure}}
at src/libstd/panicking.rs:195
9: std::panicking::default_hook
at src/libstd/panicking.rs:215
10: std::panicking::rust_panic_with_hook
at src/libstd/panicking.rs:472
11: rust_begin_unwind
at src/libstd/panicking.rs:376
12: core::panicking::panic_fmt
at src/libcore/panicking.rs:84
13: core::panicking::panic
at src/libcore/panicking.rs:51
14: minreq::response::parse_header
15: minreq::response::ResponseLazy::from_stream
16: minreq::connection::Connection::send
17: minreq::request::Request::send
18: minreq_test::main
19: std::rt::lang_start::{{closure}}
20: std::rt::lang_start_internal::{{closure}}
at src/libstd/rt.rs:52
21: std::panicking::try::do_call
at src/libstd/panicking.rs:296
22: __rust_maybe_catch_panic
at src/libpanic_unwind/lib.rs:79
23: std::panicking::try
at src/libstd/panicking.rs:272
24: std::panic::catch_unwind
at src/libstd/panic.rs:394
25: std::rt::lang_start_internal
at src/libstd/rt.rs:51
26: std::rt::lang_start
27: __libc_start_main
28: _start
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
List of 221 websites where this happens: minreq_header_assert_failures.txt
If a user defines a Vec of headers like
#[derive(Debug)]
pub struct HTTPClient {
url: &'static str,
body: JsonValue,
headers: Vec<(String, String)>,
}
and they try to iterate over HTTPClient.headers, adding each header using the .with_header() method, it results in an error, for example:
let post_tx = minreq::post(self.url);
// Iterate over the added headers in the vec
self.headers.iter().for_each(|header| {
post_tx.clone().with_header(&header.0, &header.1);
});
post_tx.send()?
I think the issue is brought about by this line:
Line 140 in 878e7cb
Is there a way to get around this, or can we use fn with_header(&mut self, ...) instead of fn with_header(mut self, ...) so that the iterator borrows instead of taking ownership?
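With the existing by-value signature, the trick is to keep the builder that with_header returns, e.g. with a fold (or reassignment in a plain loop). A sketch with a stand-in builder type (Builder is made up to mirror the by-value signature; it is not minreq's Request):

```rust
// Stand-in builder whose with_header takes `self` by value and returns the
// updated builder, like minreq's Request::with_header.
struct Builder {
    headers: Vec<(String, String)>,
}

impl Builder {
    fn with_header(mut self, k: &str, v: &str) -> Self {
        self.headers.push((k.to_string(), v.to_string()));
        self
    }
}

fn main() {
    let extra = vec![
        ("Accept".to_string(), "application/json".to_string()),
        ("X-Trace".to_string(), "1".to_string()),
    ];
    // fold threads the returned builder through every iteration, so no
    // header is silently dropped (unlike discarding the value in for_each).
    let req = extra
        .iter()
        .fold(Builder { headers: Vec::new() }, |b, (k, v)| b.with_header(k, v));
    assert_eq!(req.headers.len(), 2);
}
```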
I recently had a project where it became necessary to accept locally installed certificates as well.
Since in my project the https-rustls feature was already used (and worked in a complex CI pipeline), I tried to extend the https-rustls feature to a https-rustls-probe feature by using https://github.com/ctz/rustls-native-certs.
This implementation seems to work. My changes can be viewed here https://gitlab.com/joedr/minreq/-/commit/6506e4582e61fe51ba80c9ffb5a4224aa39a660c
My question now would be if a pull request for such a feature would be welcomed. My changes are very small, but I would probably still need some feedback since I haven't done much with rust yet.
The fix for #49 has caused a regression in our use case.
For testing, we use a server at a local port (4010). The server generates callback URLs based on the Host header.
The generated URLs changed from
http://127.0.0.1:4010/...
to
http://127.0.0.1/...
which no longer work correctly.
As per spec, the port should be included in the Host header if it is non-default (80 for http, 443 for https). Never including the port is not quite correct and breaks the assumptions of servers (Flask/werkzeug in our case).
The workaround is to pin version = 2.4.0.
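The spec-conformant behaviour described above (RFC 7230 section 5.4) can be sketched as a small helper, not minreq's actual code:

```rust
// Sketch: include the port in the Host header whenever it differs from the
// scheme's default (80 for http, 443 for https), per RFC 7230 section 5.4.
fn host_header(host: &str, port: u16, https: bool) -> String {
    let default_port = if https { 443 } else { 80 };
    if port == default_port {
        host.to_string()
    } else {
        format!("{}:{}", host, port)
    }
}

fn main() {
    // Non-default port: keep it, so callback URLs are generated correctly.
    assert_eq!(host_header("127.0.0.1", 4010, false), "127.0.0.1:4010");
    // Default ports: omit, matching what browsers send.
    assert_eq!(host_header("example.com", 80, false), "example.com");
    assert_eq!(host_header("example.com", 443, true), "example.com");
}
```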
Requests to the GitHub API consistently fail with IoError(Custom { kind: ConnectionAborted, error: "CloseNotify alert received" }) under rustls; reqwest returns a response with code 403 instead.
How to reproduce, with the https feature enabled:
minreq::get("https://api.github.com/zen").send().unwrap();
(Note that the request works if the User-Agent header is set.)
Minreq is currently unable to handle headers appearing multiple times in the response.
Only the last value is stored, as it is inserted over the previous values.
Line 303 in efbaf75
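Instead of overwriting, repeated headers can be combined: RFC 7230 section 3.2.2 allows joining them into one comma-separated value (Set-Cookie being the well-known exception that needs special handling). A minimal sketch, not minreq's code:

```rust
use std::collections::HashMap;

// Sketch: append repeated header values as "a, b" rather than keeping only
// the last one, per RFC 7230 section 3.2.2. (Set-Cookie would need
// dedicated handling and is ignored here.)
fn insert_header(headers: &mut HashMap<String, String>, key: &str, value: &str) {
    headers
        .entry(key.to_ascii_lowercase())
        .and_modify(|v| {
            v.push_str(", ");
            v.push_str(value);
        })
        .or_insert_with(|| value.to_string());
}

fn main() {
    let mut h = HashMap::new();
    insert_header(&mut h, "Vary", "Accept");
    insert_header(&mut h, "vary", "Origin"); // header names are case-insensitive
    assert_eq!(h.get("vary").map(String::as_str), Some("Accept, Origin"));
}
```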
Hello,
Thank you for putting this crate out there, I love the simple API for simple Rust programs.
When using this crate with the HTTPS feature enabled, I am getting the following error for this specific URL I am testing. It works for other URL's, but somehow connecting to this particular server fails.
I'm not sure how to debug this further, but hopefully the following is useful for you.
fn main() {
let response = minreq::get("https://www.iex.nl/Beleggingsfonds-Koers/61114463/Vanguard-FTSE-All-World-UCITS-ETF.aspx").send().expect("Error fetching HTML");
}
$ RUST_BACKTRACE=full ./target/debug/vwrl
thread 'main' panicked at 'Error fetching HTML: IoError(Os { code: 104, kind: ConnectionReset, message: "Connection reset by peer" })', src/main.rs:7:20
stack backtrace:
0: 0x55ecd94178e4 - backtrace::backtrace::libunwind::trace::hd3cb661800925418
at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.40/src/backtrace/libunwind.rs:88
1: 0x55ecd94178e4 - backtrace::backtrace::trace_unsynchronized::h64b171cb1575b1ef
at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.40/src/backtrace/mod.rs:66
2: 0x55ecd94178e4 - std::sys_common::backtrace::_print_fmt::h3b631ab3a555d066
at src/libstd/sys_common/backtrace.rs:77
3: 0x55ecd94178e4 - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::h958356a888a9d47f
at src/libstd/sys_common/backtrace.rs:59
4: 0x55ecd9435f3c - core::fmt::write::h2d61f5c0328557bc
at src/libcore/fmt/mod.rs:1057
5: 0x55ecd94147a7 - std::io::Write::write_fmt::h0410c63bf2aeeef0
at src/libstd/io/mod.rs:1426
6: 0x55ecd9419985 - std::sys_common::backtrace::_print::hef794572a53c64d7
at src/libstd/sys_common/backtrace.rs:62
7: 0x55ecd9419985 - std::sys_common::backtrace::print::h3fc3a5e9c940e80e
at src/libstd/sys_common/backtrace.rs:49
8: 0x55ecd9419985 - std::panicking::default_hook::{{closure}}::h3e7698b7b1d66f6c
at src/libstd/panicking.rs:204
9: 0x55ecd9419671 - std::panicking::default_hook::h5728e803511699ca
at src/libstd/panicking.rs:224
10: 0x55ecd9419f8a - std::panicking::rust_panic_with_hook::h254ad17cf54d371c
at src/libstd/panicking.rs:472
11: 0x55ecd9419b70 - rust_begin_unwind
at src/libstd/panicking.rs:380
12: 0x55ecd94348d1 - core::panicking::panic_fmt::h157efb1de94e218e
at src/libcore/panicking.rs:85
13: 0x55ecd94346f3 - core::option::expect_none_failed::h2f37193330a95f9a
at src/libcore/option.rs:1198
14: 0x55ecd9025e72 - core::result::Result<T,E>::expect::hdda15be9e234232a
at /rustc/48840618382eccb8a799320c8e5d08e3b52f4c42/src/libcore/result.rs:990
15: 0x55ecd9021a38 - vwrl::main::hb5a44bbb096dada5
at src/main.rs:7
16: 0x55ecd90263b0 - std::rt::lang_start::{{closure}}::hd0431b761e50a1e6
at /rustc/48840618382eccb8a799320c8e5d08e3b52f4c42/src/libstd/rt.rs:67
17: 0x55ecd9419a53 - std::rt::lang_start_internal::{{closure}}::hcc1a2b66a85e517f
at src/libstd/rt.rs:52
18: 0x55ecd9419a53 - std::panicking::try::do_call::h6e00bbd8754db59b
at src/libstd/panicking.rs:305
19: 0x55ecd941cc17 - __rust_maybe_catch_panic
at src/libpanic_unwind/lib.rs:86
20: 0x55ecd941a430 - std::panicking::try::h656a1c74080d7a83
at src/libstd/panicking.rs:281
21: 0x55ecd941a430 - std::panic::catch_unwind::hf379374ddffee909
at src/libstd/panic.rs:394
22: 0x55ecd941a430 - std::rt::lang_start_internal::h8bf94f6480420edb
at src/libstd/rt.rs:51
23: 0x55ecd9026389 - std::rt::lang_start::h301862297bf0bd6c
at /rustc/48840618382eccb8a799320c8e5d08e3b52f4c42/src/libstd/rt.rs:67
24: 0x55ecd9022cea - main
25: 0x7f876cd71153 - __libc_start_main
26: 0x55ecd902116e - _start
27: 0x0 - <unknown>
Does minreq handle redirections?
I can't find an environment variable controlling this, but issuing a get() causes a lot of spew on stdout. This is a problem for my application, since it wants to write clean output to stdout.
Looking at version 1.0.2 cached on my system, I note a print!("{}", c) in read_from_stream in connection.rs. So perhaps it's time for 1.0.3?
Add support for IP addresses like 127.0.0.1
In the current implementation, if a response header contains ":\r\n", it will produce ("", ""). Do we need to keep this?
context:
Line 475 in 242e50b
minreq will use an unbounded amount of memory if the server sends a single infinitely large header. This can be used to exhaust the memory on the machine and cause a denial of service.
You can reproduce the issue by running the following in a Linux console and then connecting to localhost:8080 with minreq:
( echo -e "HTTP/1.1 200 OK\r"; echo -n "Huge-header: "; yes A | tr -d '\n' ) | nc -l localhost 8080
Tested using this code for minreq. You can inspect the Cargo.lock to know the exact dependency versions.
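The standard mitigation is to cap how many header bytes the client will buffer; a malicious endless header then fails fast instead of exhausting memory. A minimal sketch (the 64 KiB cap is an arbitrary choice for illustration, not minreq's code):

```rust
use std::io::{self, Read};

// Arbitrary cap for illustration; real clients pick a similar bound.
const MAX_HEADERS_SIZE: usize = 64 * 1024;

// Sketch: read the header block byte by byte through a capped reader, so an
// endless "Huge-header: AAAA..." errors out instead of growing unboundedly.
fn read_headers(stream: impl Read) -> io::Result<Vec<u8>> {
    let mut limited = stream.take(MAX_HEADERS_SIZE as u64);
    let mut buf = Vec::new();
    let mut byte = [0u8; 1];
    loop {
        if limited.read(&mut byte)? == 0 {
            // EOF before the blank line: truncated stream or over the cap.
            return Err(io::Error::new(
                io::ErrorKind::InvalidData,
                "headers too large or truncated",
            ));
        }
        buf.push(byte[0]);
        if buf.ends_with(b"\r\n\r\n") {
            return Ok(buf); // end of the header block
        }
    }
}

fn main() {
    let ok = read_headers(io::Cursor::new(b"HTTP/1.1 200 OK\r\nA: b\r\n\r\nbody".to_vec()));
    assert!(ok.unwrap().ends_with(b"\r\n\r\n"));
    // An infinite stream of 'A' bytes (like the nc reproducer) now errors.
    assert!(read_headers(io::repeat(b'A')).is_err());
}
```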
As mentioned in PR #32, minreq does not handle IP addresses in URLs properly right now. Sending the request via the basic HTTP backend seems to work, as does https-native. I couldn't test the https-bundled-* features yet, so currently I'm assuming the problem only exists when using https-rustls.
The following code (with the https-rustls feature enabled) prints the first print normally, as expected, but crashes with IoError(Custom { kind: Other, error: InvalidDNSNameError }) on the second one.
fn main() -> Result<(), minreq::Error> {
println!(
"GET https://httpbin.org/get: {:?}",
minreq::get("https://httpbin.org/get").send()?.as_str()
);
println!(
"GET https://34.194.129.11: {:?}",
minreq::get("https://34.194.129.11/get").send()?.as_str()
);
Ok(())
}
Note: the problem here is that the error is about DNS; it should actually error out a little later because of the certificates, since httpbin.org's certificates only apply to *.httpbin.org.
This is a small fix, I'll patch this at some point.
On some websites, e.g. http://serveblog.net, minreq fails with the following error:
invalid port value
Firefox and curl work fine.
126 websites out of the top million from Feb 3 Tranco list are affected.
Tested using this code. Test tool output from all affected websites: minreq-invalid-port.tar.gz
I've just migrated one of my projects from reqwest to minreq and nearly halved the size of my binary, and massively improved my compile times (async begone), so huge thank you for this awesome library!
One thing that tripped me up while making the migration, though, was needing to encode URLs/parameters. Obviously reqwest 'just works' in this regard, and not being a web dev by trade, it took me a bit of poking to get to the bottom of the angry responses I was getting. (I'm not sure if it was supposed to, but enabling the punycode feature didn't rectify the issue.)
What I was wondering was whether there is a case for using a library like urlencoding (<250 SLoC; no dependencies) as part of minreq? It would make this library that bit more intuitive and easy to use. I understand that minreq is intended to be, well, minimal, so I wanted to propose the idea as an issue first. I could potentially write the PR for any of the ideas below.
Suggestions:
- urlencoding as a default dependency, always used
- urlencoding as an optional feature (always used when enabled)
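For context, the encoding in question is small either way. A minimal percent-encoding sketch for query components, roughly what a crate like urlencoding does (this is an illustration, not that crate's actual source): unreserved characters per RFC 3986 pass through, everything else is %-escaped.

```rust
// Sketch: percent-encode a string for use in a URL query component.
// Unreserved characters (RFC 3986 section 2.3) pass through unchanged.
fn percent_encode(s: &str) -> String {
    let mut out = String::new();
    for b in s.bytes() {
        match b {
            b'A'..=b'Z' | b'a'..=b'z' | b'0'..=b'9' | b'-' | b'_' | b'.' | b'~' => {
                out.push(b as char)
            }
            _ => out.push_str(&format!("%{:02X}", b)),
        }
    }
    out
}

fn main() {
    assert_eq!(percent_encode("a b&c=d"), "a%20b%26c%3Dd");
    assert_eq!(percent_encode("safe-chars_.~"), "safe-chars_.~");
}
```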