
Viceroy

Viceroy provides local testing for developers working with Fastly Compute. It allows you to run services written against the Compute APIs on your local development machine, and allows you to configure testing backends for your service to communicate with.

Viceroy is normally used through the Fastly CLI's fastly compute serve command, where it is fully integrated into Compute workflows. However, it is also a standalone open source tool with its own CLI and a Rust library that can be embedded into your own testing infrastructure.

Installation

Via the Fastly CLI

As mentioned above, most users of Compute should do local testing via the Fastly CLI, rather than working with Viceroy directly. Any CLI release of version 0.34 or above supports local testing, and the workflow is documented here.

As a standalone tool from crates.io

To install Viceroy as a standalone tool, you'll need to first install Rust if you haven't already. Then run cargo install viceroy, which will download and build the latest Viceroy release.

Usage as a library

Viceroy can be used as a Rust library. This is useful if you want to run integration tests in the same codebase; we provide a helper method, handle_request, for this purpose. Before you build or test your code, we recommend setting the release flag (e.g. cargo test --release); otherwise, execution will be very slow. This is due to the Cranelift compiler, which is extremely slow when compiled in debug mode. Additionally, if you use GitHub Actions, don't forget to set up a build cache for Rust, which will speed up your build times considerably.
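As an alternative to passing --release everywhere, Cargo can be told to optimize only your dependencies (Cranelift included) while keeping your own crate in fast, debuggable mode. A hedged sketch of such a profile override in Cargo.toml (verify against your Cargo version; this is a general Cargo technique, not something Viceroy requires):

```toml
# Optimize all dependency crates even in the dev/test profiles, so the
# Cranelift compiler runs at full speed while your own code keeps quick,
# debuggable builds.
[profile.dev.package."*"]
opt-level = 3
```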

Usage as a standalone tool

NOTE: the Viceroy standalone CLI has a somewhat different interface from that of the Fastly CLI. Command-line options below describe the standalone Viceroy interface.

After installation, the viceroy command should be available on your path. The only required argument is the path to a compiled .wasm blob, which can be built by fastly compute build. The Fastly CLI should put the blob at bin/main.wasm. To test the service, you can run:

viceroy bin/main.wasm

This will start a local server (by default at: http://127.0.0.1:7676), which can be used to make requests to your Compute service locally. You can make requests by using curl, or you can send a simple GET request by visiting the URL in your web browser.

Usage as a test runner

Viceroy can also be used as a test runner for running Rust unit tests for Compute applications in the following way:

  1. Ensure the viceroy command is available in your path
  2. Add the following to your project's .cargo/config:
[build]
target = "wasm32-wasi"

[target.wasm32-wasi]
runner = "viceroy run -C fastly.toml -- "
  3. Install cargo-nextest
  4. Write your tests that use the fastly crate. For example:
use fastly::http::Method;
use fastly::mime::TEXT_PLAIN_UTF_8;

#[test]
fn test_using_client_request() {
    let client_req = fastly::Request::from_client();
    assert_eq!(client_req.get_method(), Method::GET);
    assert_eq!(client_req.get_path(), "/");
}

#[test]
fn test_using_bodies() {
    let mut body1 = fastly::Body::new();
    body1.write_str("hello, ");
    let mut body2 = fastly::Body::new();
    body2.write_str("Viceroy!");
    body1.append(body2);
    let appended_str = body1.into_string();
    assert_eq!(appended_str, "hello, Viceroy!");
}

#[test]
fn test_a_handler_with_fastly_types() {
    // `some_handler` is a request handler defined elsewhere in your crate.
    let req = fastly::Request::get("http://example.com/Viceroy");
    let resp = some_handler(req).expect("request succeeds");
    assert_eq!(resp.get_content_type(), Some(TEXT_PLAIN_UTF_8));
    assert_eq!(resp.into_body_str(), "hello, /Viceroy!");
}
  5. Run your tests with cargo nextest run:
 % cargo nextest run
   Compiling unit-tests-test v0.1.0
    Finished test [unoptimized + debuginfo] target(s) in 1.16s
    Starting 3 tests across 1 binaries
        PASS [   2.106s] unit-tests-test::bin/unit-tests-test tests::test_a_handler_with_fastly_types
        PASS [   2.225s] unit-tests-test::bin/unit-tests-test tests::test_using_bodies
        PASS [   2.223s] unit-tests-test::bin/unit-tests-test tests::test_using_client_request
------------
     Summary [   2.230s] 3 tests run: 3 passed, 0 skipped

cargo-nextest is needed rather than plain cargo test because there is no way to recover from a panic in Wasm, so test execution would halt as soon as the first test failure occurred. To let the remaining tests keep executing after a failure, each test must run in its own Wasm instance, with the results aggregated to report overall success or failure. cargo-nextest handles that orchestration for us.

Documentation

Since the Fastly CLI uses Viceroy under the hood, the two share documentation for everything other than CLI differences. You can find general documentation for local testing here, and documentation about configuring local testing here. Documentation for Viceroy's CLI can be found via --help.

Colophon

Viceroy

The viceroy is a butterfly whose color and pattern mimic those of the monarch butterfly, but it is smaller in size.

viceroy's People

Contributors

acfoltzer, acme, acw, alexcrichton, aturon, bbutkovic, computermouth, elliottt, fgsch, geekbeast, ha0li, integralist, itsrainy, jakechampion, jameysharp, jedisct1, joeshaw, kailan, katef, kination, kpfleming, mgattozzi, mhp, pchickey, phamann, silentbicycle, starptech, tetsuharuohzeki, tschneidereit, ulyssa


viceroy's Issues

Instrument and emit stats on program wall clock duration

Fastly's UI provides stats on the total wall clock of a Compute@Edge program running in production. Viceroy should emit information about the wall clock duration of a single instance execution, so that users can reason about the behavior of their program in production.

Add a mode to execute a Wasm program without an incoming request

Viceroy should be able to execute a C@E Wasm program from the command line as just a one-shot without requiring an external process to send it a request. This would enable the use of Viceroy to run test suites that are compiled to a wasm32-wasi target with the C@E Wasm imports, such as those produced by cargo test --target wasm32-wasi.

The wasmtime CLI could be a useful model here. In a Rust project, I am able to run my test suite in both native code and Wasm simply by adding a .cargo/config:

[target.wasm32-wasi]
runner = "wasmtime"

In addition to providing the one-shot execution model, wasmtime also hooks up stdio and such to make it look like the Wasm program is executing within the host environment. That's pretty different from the headless daemon approach that Viceroy currently supports.

Finally, we'll need to figure out what to do about hostcalls. For the purposes of running a test suite, we'd get a lot of value even if the C@E hostcalls didn't do all that much. There are many useful tests to write that don't even use the C@E APIs, and many more that only would need, e.g., the ability to read and write to a body. Eventually I could imagine wanting some more sophisticated behavior even in a unit test, but since the daemonized version is already there I don't think the needs are as pressing.

Serve over HTTPS

👋 Have we considered serving fastly compute serve over HTTPS instead of HTTP? My application isn’t working because I set HTTPS-only cookies, and I don’t want to make too many changes to the application so that it works locally.

Here's a screenshot from Google Chrome developer tools showing a warning about the Set-Cookie header:


cannot find function `get_region` in module `os`

error[E0425]: cannot find function `get_region` in module `os`
   --> /home/jochen/.cargo/registry/src/github.com-1ecc6299db9ec823/region-2.2.0/src/lib.rs:133:7
    |
133 |   os::get_region(page::floor(address as usize) as *const u8)
    |       ^^^^^^^^^^ not found in `os`

   Compiling jobserver v0.1.22
error: aborting due to previous error

Allow host overrides in TOML configuration

One option exposed in Fastly's management UI is the ability to provide an "override host." See this guide for more information. The core premise is that the host header will be rewritten with a different host than is in the outbound URL.

We should expose a setting in our TOML schema that does the same thing when running a program locally.
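For illustration only, such a setting might look something like the following in a local_server backend definition (the field name and placement are a guess at what the schema could look like, not a committed design):

```toml
# Hypothetical TOML sketch: rewrite the Host header sent to a local test
# backend, independently of the host in the outbound URL.
[local_server.backends.origin]
url = "http://127.0.0.1:8080/"
override_host = "example.org"
```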

Fastly crate panicking when a dictionary item doesn't exist

While doing some testing locally today, I came across a panic in the fastly crate. It seems to happen because a dictionary item lookup fails when the item does not exist in the dictionary:
Aug 27 16:50:20.450 ERROR request{id=0}: Hostcall yielded an error: Unknown dictionary item: Manifest-2d98131 thread 'main' panicked at 'fastly_dictionary::get returned an unrecognized result', /Users/mmohammed/.cargo/registry/src/github.com-1ecc6299db9ec823/fastly-0.7.3/src/dictionary/handle.rs:81:23

I do not see this error on C@E.

'Cannot wait on pending future: must enable wiggle "async" future and execute on an async Store'

I'm experimenting with a Wasm guest program that makes several calls to pending_req_poll. My program works as expected when deployed to C@E; however, loading it into Viceroy causes a panic.

Jul 13 18:35:51.785  INFO request{id=0}: handling request GET http://localhost:7878/
thread 'tokio-runtime-worker' panicked at 'Cannot wait on pending future: must enable wiggle "async" future and execute on an async Store', ~/.cargo/registry/src/github.com-1ecc6299db9ec823/wiggle-0.28.0/src/lib.rs:946:13
stack backtrace:
   0: std::panicking::begin_panic
   1: wiggle::run_in_dummy_executor
   2: <std::panic::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
   3: <F as wasmtime::func::IntoFunc<T,(wasmtime::func::Caller<T>,A1,A2,A3,A4),R>>::into_func::wasm_to_host_shim
   4: <unknown>
   5: wasmtime_runtime::traphandlers::catch_traps::call_closure
   6: _RegisterSetjmp
   7: wasmtime_runtime::traphandlers::catch_traps
   8: <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once
   9: wasmtime_fiber::unix::fiber_start
  10: _wasmtime_fiber_start
  11: wasmtime_fiber::Fiber<Resume,Yield,Return>::resume
  12: <wasmtime::store::<impl wasmtime::store::context::StoreOpaqueSend>::on_fiber::{{closure}}::FiberFuture as core::future::future::Future>::poll
  13: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
  14: <tracing::instrument::Instrumented<T> as core::future::future::Future>::poll
  15: tokio::runtime::task::harness::poll_future
  16: tokio::runtime::task::harness::Harness<T,S>::poll
  17: std::thread::local::LocalKey<T>::with
  18: tokio::runtime::thread_pool::worker::Context::run_task
  19: tokio::runtime::thread_pool::worker::Context::run
  20: tokio::macros::scoped_tls::ScopedKey<T>::set
  21: tokio::runtime::thread_pool::worker::run
  22: tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut
  23: std::panicking::try
  24: tokio::runtime::task::harness::Harness<T,S>::poll
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
thread 'tokio-runtime-worker' panicked at 'guest worker finished without panicking: JoinError::Panic(...)', lib/src/execute.rs:189:18
stack backtrace:
   0: rust_begin_unwind
             at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/std/src/panicking.rs:515:5
   1: core::panicking::panic_fmt
             at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/core/src/panicking.rs:92:14
   2: core::result::unwrap_failed
             at /rustc/657bc01888e6297257655585f9c475a0801db6d2/library/core/src/result.rs:1355:5
   3: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
   4: hyper::proto::h1::dispatch::Dispatcher<D,Bs,I,T>::poll_catch
   5: <hyper::server::conn::upgrades::UpgradeableConnection<I,S,E> as core::future::future::Future>::poll
   6: <hyper::server::conn::spawn_all::NewSvcTask<I,N,S,E,W> as core::future::future::Future>::poll
   7: tokio::runtime::task::harness::poll_future
   8: tokio::runtime::task::harness::Harness<T,S>::poll
   9: std::thread::local::LocalKey<T>::with
  10: tokio::runtime::thread_pool::worker::Context::run_task
  11: tokio::runtime::thread_pool::worker::Context::run
  12: tokio::macros::scoped_tls::ScopedKey<T>::set
  13: tokio::runtime::thread_pool::worker::run
  14: tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut
  15: std::panicking::try
  16: tokio::runtime::task::harness::Harness<T,S>::poll
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.

Location of the panic from the backtrace: https://docs.wasmtime.dev/api/src/wiggle/lib.rs.html#946

It would appear that the async feature for wiggle is enabled: https://github.com/fastly/Viceroy/blob/main/lib/Cargo.toml#L41

I don't have a minimal repro I can share publicly but I'm happy to send my wasm file upon request!

Instrument and emit stats on program RAM usage

Fastly's UI provides stats on the RAM usage of a Compute@Edge program running in production. Viceroy should emit information about the memory usage of a single instance execution, so that users can reason about the behavior of their program in production.

Implement caching directive support

As mentioned in the project's README, we do not currently support all of the features provided by Compute@Edge's production environment.

One missing feature is caching behavior. This is a tracking issue to document ongoing work to implement this support.

Ensure hostcall error codes match C@E

We need to do a careful pass over the pipeline through which Viceroy yields errors back to the guest, to ensure that error codes match production C@E in all cases.

Align port defaults

The Fastly CLI defaults to port 7676, while the Viceroy CLI defaults to port 7878.

Implement geolocation support

As mentioned in the project's README, we do not currently support all of the features provided by Compute@Edge's production environment. One of these missing features is geolocation.

This is a tracking issue to document ongoing work to implement this support.

Broaden platform support on both CI and releases

The Fastly CLI supports some platforms that Viceroy releases don't currently cover, such as arm64. We should add these platforms to both CI and release automation, and make sure the CLI can distribute the new binaries.

RFC: Reconsider behavior when `-C` is not passed

Currently if you don't provide -C, Viceroy will simply not load any configuration.

Normally -C is automatically provided by the Fastly CLI, but when invoking Viceroy directly it probably makes sense to default to fastly.toml if no manifest is provided. If fastly.toml is not found, then we can proceed without configuration.
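The fallback described above might be sketched like this (function and behavior are illustrative of the proposal, not Viceroy's actual internals):

```rust
use std::path::PathBuf;

/// Resolve the manifest path: prefer an explicit `-C` argument, then fall
/// back to `./fastly.toml` if it exists, else run with no configuration.
fn resolve_manifest(cli_arg: Option<PathBuf>) -> Option<PathBuf> {
    cli_arg.or_else(|| {
        let default = PathBuf::from("fastly.toml");
        if default.exists() { Some(default) } else { None }
    })
}

fn main() {
    match resolve_manifest(None) {
        Some(path) => println!("loading configuration from {}", path.display()),
        None => println!("no manifest found; proceeding without configuration"),
    }
}
```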

Thoughts?

Add Issue Templates

GitHub offers issue templates to help provide some structure to things like bug reports, feature requests, et al. We should take advantage of that and add some of these to the project.

Instrument and emit stats on program CPU time

Fastly's UI provides stats on the CPU time of a Compute@Edge program running in production. Viceroy should emit information about the duration of CPU time used for a single instance execution, so that users can reason about the behavior of their program in production.

Support `--watch`ing and auto-reloading binary upon changes

During development, it's extremely convenient to have the service automatically reload on change. This is a tracking issue for implementing "watching" functionality.


Open Questions:

  • It might make sense to have it be the default behavior, perhaps with the ability to explicitly disable it?
  • Should Viceroy or the Fastly CLI be responsible for watching files and reloading upon changes? How should the list of files to watch be specified, given that different languages must be supported? (cc: @Integralist)

Originally reported by @tschneidereit; cc: @aturon, @peterbourgon.

Interactive Mode

In addition to running as a daemon, there should be an option (given via a CLI flag) to run Viceroy "interactively", meaning that requests are provided in sequence via stdin and responses via stdout. This configuration will allow Viceroy to be more easily used in test harnesses that are not equipped to run and interact with a daemon.

Reported by @aturon.

Add more contextual information to errors from external libraries

A number of Viceroy's error variants are automatic conversions from external library errors -- things like http's errors for invalid headers or URIs, etc.

Most of those underlying errors don't provide any contextual information, such as which header or URI was provided that failed to be valid.

Rather than automatically converting these errors, we should provide our own error variants with contextual information that we can present to users in the error trace.
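A hedged sketch of the idea (the variant and field names are illustrative, not Viceroy's actual error type):

```rust
use std::fmt;

/// Instead of a bare automatic conversion from an external library's error,
/// carry the offending input so the error trace can tell the user *which*
/// header name was invalid.
#[derive(Debug)]
enum Error {
    InvalidHeaderName { name: String },
}

impl fmt::Display for Error {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Error::InvalidHeaderName { name } => {
                write!(f, "invalid header name: {:?}", name)
            }
        }
    }
}

fn main() {
    let err = Error::InvalidHeaderName { name: "bad\nheader".to_string() };
    println!("{}", err);
}
```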

Add `publish.rs` to top-level workspace

Today, we have a helper script to check that all of the packages in the workspace are publishable. This script was heavily inspired by a similar facility in wasmtime, which goes out of its way to avoid incurring any dependencies.

In conversation with @aturon, however, we agreed that there are some benefits to allowing dependencies to exist. For example, the code used to parse out the Cargo.toml manifest is a bit ungainly. Additionally, we could use something like clap to handle parsing command-line options.

Implement TLS information support

As mentioned in the project's README, we do not currently support all of the features provided by Compute@Edge's production environment. One of these missing features is acquiring TLS connection information.

This is a tracking issue to document ongoing work to implement this support.

Viceroy exits with a nonzero status code on ctrl-C

ugh /tmp/demo-app ~/Library/ApplicationSupport/fastly/viceroy -V
viceroy 0.2.4

ugh /tmp/demo-app ~/Library/ApplicationSupport/fastly/viceroy --config fastly.toml main.wasm
Sep 09 23:28:19.458  INFO checking if backend 'httpbin' is up
Sep 09 23:28:20.042  INFO backend 'httpbin' is up
Sep 09 23:28:20.043  INFO Listening on http://127.0.0.1:7878
^C

ugh /tmp/demo-app echo $status
130

This causes fastly compute serve to interpret a normal execution as having errored, which prints a big red message that is scary and bad 😨 and also not really true I guess? See fastly/cli#400.

tl;dr: Viceroy should return exit code 0 on ctrl-C/SIGINT
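For background, the 130 above is the shell's convention for death-by-signal: 128 plus the signal number, and SIGINT is 2. A hedged sketch of that relationship (illustrative only; the actual fix would be for Viceroy to catch SIGINT and exit with status 0):

```rust
/// Shells report a process killed by a signal as 128 + signum, so the
/// default SIGINT disposition (signal 2) surfaces as exit status 130.
fn shell_status_for_signal(signum: i32) -> i32 {
    128 + signum
}

fn main() {
    println!("SIGINT -> exit status {}", shell_status_for_signal(2));
}
```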

Address `wiggle_entity!` todo

Currently, we have a file lib/src/wiggle_abi/entity.rs that contains a todo note worth addressing soon: https://github.com/fastly/Viceroy/blob/main/lib/src/wiggle_abi/entity.rs#L9-L21. As the comment mentions, it exists because we must construct our handle types differently than usual. This could most likely be addressed by filing a patch to cranelift-entity. For reference, the upstream macro can be found here: https://github.com/bytecodealliance/wasmtime/blob/03077e0de9bc5bb92623d58a1e5d78b828fd1634/cranelift/entity/src/lib.rs

I've filed and self-assigned bytecodealliance/wasmtime#3047 for this!

Remove restriction on `.wasm` extension

Currently, Viceroy expects that the given binary has a .wasm extension. This causes hassle for some users, aside from the fact that .wat text format programs would also be acceptable to Viceroy. Let's remove this restriction, and instead provide a helpful error message in the event of an invalid Wasm binary.

Reported by @fgsch.
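A hedged sketch of detecting a Wasm module by content rather than extension (illustrative only, not Viceroy's implementation): the binary format begins with the 4-byte magic `\0asm`, while `.wat` text-format programs begin with an s-expression.

```rust
/// Binary Wasm modules begin with the magic bytes `\0asm` followed by a
/// 4-byte version number; checking content avoids relying on the file
/// extension entirely.
fn looks_like_wasm_binary(bytes: &[u8]) -> bool {
    bytes.starts_with(b"\0asm")
}

fn main() {
    let wasm_header = b"\0asm\x01\x00\x00\x00";
    let wat_source = b"(module)";
    println!("binary header: {}", looks_like_wasm_binary(wasm_header));
    println!("wat source:    {}", looks_like_wasm_binary(wat_source));
}
```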

Add close functionality to RequestHandle/ResponseHandle and update BodyHandle/StreamingBodyHandle

We want to release a new version of the Rust SDK that adds or updates Drop impls for (Request/Response/Body/StreamingBody)Handle, so that resources are automatically dropped from the Session when they go out of scope. Before that, we need to update Viceroy and cut a new release, so that users doing local Compute testing don't have all of their code break. This matters especially because these close calls are automatic, which might confuse people comparing local behavior against the fleet, where these changes have already rolled out. This issue tracks that progress. I've already opened the corresponding ABI PR here: fastly/compute-at-edge-abi#6. I've self-assigned this, since the internal work was also done by me and I just want to track the work in our issues.

Remove `TryInto` for `Raw` and remove `Raw` config types

We have a couple of configs where we first deserialize to a Raw config type and then call try_into to convert to the validated and massaged final config type. This is because we only use the default Deserialize implementation for config types. We can remove this extra intermediate step and type either by writing a Deserialize implementation ourselves or by using things like deserialize_with for certain fields as needed.

This isn't a high-priority item, but it would make the code base a bit easier to work with in some ways. I'm happy to explain in more detail what needs to be done if someone wants more context or guidance, and if someone already knows how to do it, I will review that PR for them. Otherwise, I can get to this at some point in the future if there isn't interest in doing it.

Request to have memory_pages (minimum page size) be configurable

Thank you for building Viceroy, this is such a great project - I really really like that it is now possible to run Fastly code locally 🎉

I've been working on porting https://polyfill.io (a Fastly sponsored project) to be fully in Compute@Edge using the JavaScript SDK.

As of right now, due to the current lack of Edge Dictionary support in Viceroy (#11), I've inlined all the polyfills into the C@E code, which makes the bundled JavaScript file 75.8 MB. After compiling the bundled JavaScript file into a Compute@Edge Wasm file, it becomes 614 MB.

When trying to run this WASM file using Viceroy, an error is thrown:

Jul 31 20:33:53.927 ERROR memory index 0 has a minimum page size of 9469 which exceeds the limit of 2048

Is the limit of 2048 a hard limit of the Fastly Compute@Edge platform?
Would it be possible to have this limit raised or made configurable within Viceroy?

Update: I cloned Viceroy and changed the minimum page size from 2,048 to 10,240; when running Viceroy with the Wasm file, it works, and it used 620.6 MB of WebAssembly heap.

Handle body-less response types appropriately

We don't currently treat response types like 204 in any special way, but hyper's default behaviors don't quite match either the HTTP spec or what C@E does. We need to follow the latter, and add tests to ensure the behavior is matched.

Reported by @aturon.
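A hedged sketch of the special-casing involved, per RFC 7230 §3.3.3 (this is illustrative of the rule, not Viceroy's implementation):

```rust
/// Responses with 1xx, 204, or 304 status codes must not carry a message
/// body, regardless of what body data the application attaches.
fn body_forbidden(status: u16) -> bool {
    matches!(status, 100..=199 | 204 | 304)
}

fn main() {
    for status in [200, 204, 304, 404] {
        println!("{}: body forbidden = {}", status, body_forbidden(status));
    }
}
```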

RFC: How should logging verbosity work

Today our logging uses default values of INFO for viceroy-cli and viceroy-lib. With one v flag (-v) it's set to DEBUG, and with two (-vv) it's set to TRACE. This is fine as a first step toward better instrumentation in Viceroy than just println and eprintln, but we should consider what other kinds of information we would want to expose.

Maybe hyper or tokio tracing events? Something else?

The implementation does allow users to set RUST_LOG themselves to bypass our default settings, but it would be nice to provide an experience where adding -v gives users useful information without overwhelming them, while remaining easy to use. The existing implementation is subject to change after this RFC and is meant to be a bare minimum.

This RFC exists so we can ask what other folks' opinions are on logging.

Originally reported by @mgattozzi.

Report host-side heap usage

Viceroy currently reports the final wasm heap size each time it completes a request. For greater fidelity to C@E, we should report the host heap usage as well, since both are counted against the overall RAM limit.

Part of #8.

Implement edge dictionary support

As mentioned in the project's README, we do not currently support all of the features provided by Compute@Edge's production environment. One of these missing features is Edge Dictionaries.

This is a tracking issue to document ongoing work to implement this support.

Specify and document versioning policy

We need to document how we version our artifacts. Namely:

  • viceroy-lib, the library providing functionality to run Compute@Edge binaries.
  • viceroy-cli, the binary that is installed by users, which provides a daemon to run these programs locally.
  • The relationship between these two; i.e. which -lib versions can be used by which -cli versions, and what constitutes a breaking change for each.

Package `README.md` with `viceroy`

See also #41, and #45.

We should package this project's README.md with the viceroy crate, so that the helpful information contained within is visible to people visiting the project on crates.io! 📦

Rework CI and trap test for faster check times

Currently, our test for handling traps involves a feature flag, which means CI jobs must compile viceroy-cli and viceroy-lib twice. We should either parallelize this in a separate job, or consider reworking it to work slightly differently.

Add more crate metadata

Our crates viceroy-lib and viceroy-cli could use a pass through their metadata. For example, when running cargo package, we see these warnings:

; cargo publish --dry-run --manifest-path=cli/Cargo.toml 
    Updating crates.io index
warning: manifest has no documentation, homepage or repository.

So, at a bare minimum these fields would be great to add! Categories and keywords might also be useful. I'm not sure what other fields are worth including (e.g. badges for GitHub), but wasmtime might be a good place to look for inspiration: https://github.com/bytecodealliance/wasmtime/blob/main/Cargo.toml#L101

Add `CONTRIBUTING.md` and other documents

Many projects provide a CONTRIBUTING.md file that includes some helpful information for people that are, as the name suggests, contributing to a project.

A CODE_OF_CONDUCT.md might also be a worthwhile addition here, while we're at it. ❤️

Fastly CLI reports an error when using ARM macOS

βœ‹πŸ» Hello

We've had a report from @yusukebe that the Fastly CLI is displaying an error to say that a corresponding Viceroy asset cannot be found:

ERROR: error downloading latest Viceroy release: no asset found for your OS (darwin) and architecture (arm64).

The error is accurate because there is indeed no ARM asset produced for Viceroy.

I realised there was a prior conversation regarding ARM support, see comments from the Windows support PR:

There are dependencies on C and assembly code (ex: wasmtime-fiber), 
so cross-compilation is not going to work unless you install an additional toolchain.

Long story short, we have to use qemu in order to build the ARM packages.

Not working on Ubuntu 18.04 (glibc < 2.29)

Hello! I am still running Ubuntu 18.04. Apart from upgrading to a newer Ubuntu version, is there a way of running Viceroy with Docker, or another way that doesn't rely on a newer version of glibc on the host machine?

Thanks!

Missing build for Linux 386

Originally reported by @epolish fastly/cli#419.

Summary:
When using the Fastly CLI it reports an error when trying to download a version of Viceroy for the Linux OS and 386 Arch. This is because there is no such build in the Viceroy release.

[macOS] Viceroy doesn't support TLS 1.3

On macOS, Viceroy is unable to connect to upstream servers that only accept TLS 1.3.

Trace:

wiggle abi{module="fastly_http_req" function="send"}: h=RequestHandle(1) b=BodyHandle(2) backend=*guest 0x100f5b/24
result=Err(HyperError(hyper::Error(Connect, Error { code: -9836, message: "bad protocol version" })))
Hostcall yielded an error: error trying to connect: bad protocol version

An example of such a server is odoh-target.alekberg.net.
