
distill's People

Contributors

aclysma, alec-deason, azriel91, basil-cow, davidvonderau, ezpuzz, frizi, grzi, happenslol, jakobhellermann, jlowry, kabergstrom, pengowen123, rua, tgolsson


distill's Issues

Implement loading of asset dependency graphs

The rpc_loader should support automatically loading an asset's dependency graph before making the load visible to users.

The load_deps field in the AssetMetadata already exists, along with get_metadata_with_dependencies in the RPC server. This should be sufficient to implement the logic in the loader.

The whole asset graph should be kept uncommitted in storage until all assets can be committed.
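As a minimal sketch of one way the loader could track this, assuming a per-load counter of uncommitted dependencies (the names here are hypothetical, not the crate's real API):

    // Hypothetical sketch: a load becomes visible only after its whole
    // dependency graph has been committed.
    struct PendingLoad {
        uncommitted_deps: usize,
    }

    impl PendingLoad {
        // Called when one of this asset's dependencies commits; returns
        // true once every dependency has committed and the whole graph
        // can be made visible at once.
        fn on_dep_committed(&mut self) -> bool {
            self.uncommitted_deps -= 1;
            self.uncommitted_deps == 0
        }
    }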

Use async-executor instead of tokio

bevy_task seems like a better fit for game stuff than tokio due to its flexible design, and async-executor is part of the bevy_task dependency tree.

Handling assets with path references in the build process

A big use-case for the asset pipeline will be compiling shaders into platform-specific formats. These shaders are usually written with #include statements that pull in other shader files at compile time. It would be good if we could support this.

I think it would be possible with the following:

  • Make the build_deps that Importers return in AssetMetadata AssetId instead of AssetUUID
  • After a full batch of imports completes, try to resolve each AssetId::FilePath in build_deps to an AssetUUID, raising an error on failure

This would only impact the import process, so the impact on the whole system will be minimized.

The reason I want build_deps to be correct is that it will be essential for implementing distributed builds in the future. If we know all the inputs required for a build (i.e. our dependency graph is correct), we can send this data to another machine and perform the build there, spreading the significant CPU load of the build process.
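A rough sketch of what that AssetId could look like (the exact shape is an assumption, not settled design):

    // Hypothetical sketch: a build_deps entry is either already resolved
    // or still a file path coming from e.g. an #include directive.
    enum AssetId {
        Uuid([u8; 16]),
        FilePath(std::path::PathBuf),
    }

    // After a batch of imports, each FilePath variant would be looked up
    // in the path -> UUID index; failure to resolve raises an error.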

`AssetDaemon` does not load assets without a file notification event

Description:

When an asset already exists at assets/the_asset.txt, the AssetDaemon will not import it, even when it has a "txt" importer, unless you touch assets/the_asset.txt while the daemon is running.

This is a surprise/gotcha in a workflow where new assets are created and saved before running the daemon.

Expected behaviour:

When the AssetDaemon starts up, it should both detect and import new assets. I think it currently detects them but does not import them.
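For illustration, a minimal sketch of the expected startup scan (the directory layout and importer dispatch are assumptions):

    use std::{fs, io, path::Path};

    // Hypothetical sketch: walk the asset directory once on startup and
    // import everything found, before relying on file notifications.
    fn import_existing(dir: &Path) -> io::Result<()> {
        for entry in fs::read_dir(dir)? {
            let path = entry?.path();
            if path.is_dir() {
                import_existing(&path)?;
            } else {
                // dispatch to the importer registered for this extension
            }
        }
        Ok(())
    }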

Impact:

User frustration / confusion.

Implement decompression in Loader

Decompression should be implemented in the loader and enabled in the asset pipeline for cached assets.
This work should probably be offloaded to a different thread to avoid blocking the main thread on larger blobs, or be implemented so that it can stay within a frame budget.
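As a sketch of the offloading idea (the codec choice, lz4_flex here, is an assumption):

    use std::sync::mpsc::Sender;
    use std::thread;

    // Hypothetical sketch: decompress large blobs on a worker thread and
    // hand the result back over a channel so the main thread never blocks.
    fn decompress_off_thread(compressed: Vec<u8>, done: Sender<Vec<u8>>) {
        thread::spawn(move || {
            let blob = lz4_flex::decompress_size_prepended(&compressed)
                .expect("decompression failed");
            let _ = done.send(blob);
        });
    }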

Error handling

The daemon currently panics on many common user errors, such as file parsing errors. It should propagate asset import errors to the metadata state so that they can be shown to the user.

The loader should also handle errors better. There are scenarios where it panics, and others where errors lead to infinite retries.

Replace scoped_threadpool

Currently I use scoped_threadpool when dispatching source file processing. This should probably be replaced with futures' FuturesUnordered so that we can remove the dependency.
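A sketch of how FuturesUnordered could replace the scoped pool, assuming each source file becomes one async task:

    use futures::stream::{FuturesUnordered, StreamExt};
    use std::path::PathBuf;

    // Hypothetical sketch: process source files as a set of futures and
    // handle results as they complete, with no extra threadpool crate.
    async fn process_sources(paths: Vec<PathBuf>) {
        let mut tasks: FuturesUnordered<_> = paths
            .into_iter()
            .map(|path| async move {
                // import/process the source file at `path`
                path
            })
            .collect();
        while let Some(_finished) = tasks.next().await {
            // record the result of each completed import
        }
    }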

Make AssetUuid and AssetTypeId newtypes

Currently serde serialises these as sequences of bytes in text formats, which is really human-unfriendly. Ideally they should be serialised as UUID strings in text formats and as 16 raw bytes in binary formats.
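A minimal sketch of what that could look like for AssetUuid, assuming a [u8; 16] newtype and the uuid crate:

    // Hypothetical sketch: human-readable formats get a UUID string,
    // binary formats get the raw 16 bytes.
    struct AssetUuid([u8; 16]);

    impl serde::Serialize for AssetUuid {
        fn serialize<S: serde::Serializer>(&self, s: S) -> Result<S::Ok, S::Error> {
            if s.is_human_readable() {
                s.serialize_str(&uuid::Uuid::from_bytes(self.0).to_string())
            } else {
                s.serialize_bytes(&self.0)
            }
        }
    }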

Support type names for SerdeImportable

SerdeImportable is a really convenient way to define custom assets, but the type identifier is a UUID which can be annoying to use without tooling. It would be great to support type names in addition to UUIDs for identifying types.

Loader should have a frame budget

The loader and related systems should support an execution time budget that can be configured by users, preferably on the microsecond scale.
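A minimal sketch of what a budgeted processing loop could look like (the API shape is an assumption):

    use std::time::{Duration, Instant};

    // Hypothetical sketch: process load work until the configured budget
    // is exhausted, then yield until the next frame.
    fn process_with_budget(budget: Duration, mut do_one_unit: impl FnMut() -> bool) {
        let start = Instant::now();
        while start.elapsed() < budget {
            if !do_one_unit() {
                break; // nothing left to do this frame
            }
        }
    }

    // e.g. process_with_budget(Duration::from_micros(500), || loader.step());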

Implement an in-memory transport

By implementing an in-memory transport (just something in-process, like a channel) for RPC (both loader and asset hub server), tokio's IO dependencies can be made optional, greatly reducing the total number of dependencies.

Remove IPC transport

The UDS/named pipes transport in asset_hub_service is only marginally faster and seems to have a number of platform-specific error cases that I don't want to spend time debugging. We should remove it for now and replace it with shared-memory IPC later.

Loader LoadState processing should avoid redundant work

Currently the main LoadState state machine iterates through all LoadStates every frame, but it generally only needs to iterate over states that may change. I think skipping Loaded assets will be sufficient, and this can probably be achieved by processing a load only when a ref is added or removed.
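A sketch of the ref-driven approach (types and names here are assumptions):

    use std::collections::HashSet;

    #[derive(Clone, Copy, PartialEq, Eq, Hash)]
    struct LoadHandle(u64);

    // Hypothetical sketch: only loads whose refcount changed since the
    // last frame are visited, so Loaded assets cost nothing per frame.
    #[derive(Default)]
    struct DirtyLoads {
        dirty: HashSet<LoadHandle>,
    }

    impl DirtyLoads {
        fn on_ref_changed(&mut self, handle: LoadHandle) {
            self.dirty.insert(handle);
        }
        fn process(&mut self) {
            for _handle in self.dirty.drain() {
                // advance only this load's state machine
            }
        }
    }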

Design asset distribution case

While the asset daemon and RPC Loader work well for development environments, we should design distribution scenarios to support our current and upcoming target platforms.

  • Windows/OSX/Linux
  • Mobile (packaged with app, separately downloaded)
  • Consoles (Switch, Xbox, PS4)
  • Web (WASM)

WASM and PC are probably highest priority.

Implement hot reloading in rpc_loader

Every time a new version of the metadata is created, the asset daemon's RPC server sends out a new snapshot to every registered listener (atelier_schema::service::asset_hub::listener). Currently, the rpc_loader replaces its active snapshot with the new one every time.

To implement hot reloading, the rpc_loader should check each new snapshot for changed assets and trigger a reload of the modified assets. I'm not sure what should happen with deleted assets, though.
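A sketch of the diffing step, assuming each snapshot can be reduced to a map from asset UUID to an import/build hash:

    use std::collections::HashMap;

    type AssetUuid = [u8; 16];

    // Hypothetical sketch: anything present in both snapshots with a
    // changed hash is queued for reload; deletions are the open question.
    fn changed_assets(
        old: &HashMap<AssetUuid, u64>,
        new: &HashMap<AssetUuid, u64>,
    ) -> Vec<AssetUuid> {
        new.iter()
            .filter(|(id, hash)| old.get(*id).map_or(false, |h| h != *hash))
            .map(|(id, _)| *id)
            .collect()
    }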

Remove dependency on capnpc

The schema crate currently runs capnpc to generate code in build.rs, which incurs a dependency on capnpc for everyone using the project. By moving the contents of build.rs into a separate binary sub-crate that is run manually, we can ensure capnpc is only needed when the schema actually changes.
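A sketch of the manually-run codegen binary (the schema paths are assumptions; CompilerCommand is capnpc's real API):

    // Hypothetical sketch: code generation as a small bin crate that is
    // run by hand whenever the schema changes, instead of in build.rs.
    fn main() {
        capnpc::CompilerCommand::new()
            .src_prefix("schemas")
            .file("schemas/data.capnp")
            .run()
            .expect("capnp schema compilation failed");
    }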

Handle reloading of assets that are dropped when device is lost

When a graphics device is lost, all assets that were loaded into the driver will be lost, and need to be reloaded from source. Ideally, this would be done synchronously to prevent inconsistent frames where some things are loaded and some are no longer loaded.

"there is no reactor running" when using packfile loader

In my app, when updating the loader, I get:

thread 'main' panicked at 'there is no reactor running, must be called from the context of a Tokio 1.x runtime', /Users/pmd/.cargo/registry/src/github.com-1ecc6299db9ec823/distill-loader-0.0.2/src/packfile_io.rs:176:13

RpcIO uses the runtime it creates (it locks self.runtime and passes it to process_requests). I think PackfileReader tries to do the same by calling self.0.runtime.enter();, but the guard returned by enter() is dropped immediately there. Those lines should probably be let _guard = self.0.runtime.enter(); instead, so that the guard stays alive and keeps the runtime context entered.

I'll test and PR this fix (assuming it's the correct fix).
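For reference, a sketch of the fix in isolation (tokio's Runtime::enter is real; the surrounding function is hypothetical):

    // Hypothetical sketch: enter() returns a guard that keeps the Tokio
    // context active only while it is alive, so it must stay bound to a
    // named variable for the rest of the scope.
    fn with_runtime_context(runtime: &tokio::runtime::Runtime) {
        let _guard = runtime.enter();
        // anything here may now rely on the runtime context; a bare
        // `runtime.enter();` (or `let _ =`) drops the guard immediately
    }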

Implement an LMDB-compatible in-memory KV store

In an effort to minimize the dependency footprint of the project, implementing an in-memory key-value store that exposes the LMDB API would allow lmdb to become an optional dependency. It will need to support LMDB's transaction semantics.
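A minimal sketch of the transaction semantics such a store would need (API names here are assumptions, not LMDB's actual bindings):

    use std::collections::BTreeMap;

    // Hypothetical sketch: writes are staged in a transaction and become
    // visible only on commit, mirroring LMDB's write-transaction model.
    #[derive(Default)]
    struct MemDb {
        committed: BTreeMap<Vec<u8>, Vec<u8>>,
    }

    struct WriteTxn<'a> {
        db: &'a mut MemDb,
        pending: BTreeMap<Vec<u8>, Vec<u8>>,
    }

    impl MemDb {
        fn write(&mut self) -> WriteTxn<'_> {
            WriteTxn { db: self, pending: BTreeMap::new() }
        }
    }

    impl<'a> WriteTxn<'a> {
        fn put(&mut self, key: &[u8], value: &[u8]) {
            self.pending.insert(key.to_vec(), value.to_vec());
        }
        // Dropping the txn without calling commit() discards the writes.
        fn commit(self) {
            self.db.committed.extend(self.pending);
        }
    }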

Serde Handles in assets keep assets alive

Deserialized Handle/GenericHandle should not keep assets alive; this should be as easy as skipping add/remove ref for them, but they should still support cloning into real references. We should probably remove mutable access to assets so that we don't need to handle users putting live references into assets.

Make `inventory` optional

inventory (and linkme) do not work on some platforms, and maybe we should prefer manual registration by default. Making inventory optional will also reduce the dependency footprint of the whole project.

SerdeImportable depends on typetag right now, which requires inventory. We might need to fork or switch out typetag to ensure we can still have simple custom assets.

CI tests hang on RpcIO error

Error: 1-11T00:42:06.812][ERROR][atelier_loader::rpc_io] Error connecting RpcIO: Connection refused (os error 61)

This error is preventing tests from completing in CI.

I believe we should:

  • Limit the number of connection attempts or retries in CI
  • Attempt to solve the underlying issue

The commands under "Try it out" fail

I'm new to Atelier, so I tried to follow the "Try it out" instructions in the readme, but it failed:

warning: profiles for the non root package will be ignored, specify profiles at the workspace root:
package:   /home/vagrant/atelier-assets/daemon/Cargo.toml
workspace: /home/vagrant/atelier-assets/Cargo.toml
error: `cargo run` could not determine which binary to run. Use the `--bin` option to specify a binary, or the `default-run` manifest key.
available binaries: atelier-daemon, atelier-client, atelier-cli

So I tried specifying that I wanted to run atelier-client:

 $ cargo run --release --bin atelier-client
warning: profiles for the non root package will be ignored, specify profiles at the workspace root:
package:   /home/vagrant/atelier-assets/daemon/Cargo.toml
workspace: /home/vagrant/atelier-assets/Cargo.toml
    Finished release [optimized] target(s) in 0.22s
     Running `target/release/atelier-client`
thread '<unnamed>' panicked at 'failed to create named pipe: Os { code: 2, kind: NotFound, message: "No such file or directory" }', src/libcore/result.rs:1084:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace.
thread '<unnamed>' panicked at 'failed to create named pipe: Os { code: 2, kind: NotFound, message: "No such file or directory" }', src/libcore/result.rs:1084:5
thread '<unnamed>' panicked at 'failed to create named pipe: Os { code: 2, kind: NotFound, message: "No such file or directory" }', src/libcore/result.rs:1084:5
thread '<unnamed>' panicked at 'failed to create named pipe: Os { code: 2, kind: NotFound, message: "No such file or directory" }', src/libcore/result.rs:1084:5
thread '<unnamed>' panicked at 'failed to create named pipe: Os { code: 2, kind: NotFound, message: "No such file or directory" }', src/libcore/result.rs:1084:5
thread '<unnamed>' panicked at 'failed to create named pipe: Os { code: 2, kind: NotFound, message: "No such file or directory" }', src/libcore/result.rs:1084:5
thread '<unnamed>' panicked at 'failed to create named pipe: Os { code: 2, kind: NotFound, message: "No such file or directory" }', src/libcore/result.rs:1084:5
thread '<unnamed>' panicked at 'failed to create named pipe: Os { code: 2, kind: NotFound, message: "No such file or directory" }', src/libcore/result.rs:1084:5
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Any', src/libcore/result.rs:1084:5

So that's not the right way to do it either. I also tried running atelier-daemon instead:

warning: profiles for the non root package will be ignored, specify profiles at the workspace root:
package:   /home/vagrant/atelier-assets/daemon/Cargo.toml
workspace: /home/vagrant/atelier-assets/Cargo.toml
   Compiling openssl-sys v0.9.48
   Compiling alsa-sys v0.1.2
error: failed to run custom build command for `alsa-sys v0.1.2`

Caused by:
  process didn't exit successfully: `/home/vagrant/atelier-assets/target/release/build/alsa-sys-0f3a895d3b79df9d/build-script-build` (exit code: 101)
--- stderr
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: "`\"pkg-config\" \"--libs\" \"--cflags\" \"alsa\"` did not exit successfully: exit code: 1\n--- stderr\nPackage alsa was not found in the pkg-config search path.\nPerhaps you should add the directory containing `alsa.pc\'\nto the PKG_CONFIG_PATH environment variable\nNo package \'alsa\' found\n"', src/libcore/result.rs:1084:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace.

warning: build failed, waiting for other jobs to finish...
error: build failed

So it seems something is missing from the getting started instructions.

The above is on Ubuntu, on the master branch, with Rust nightly (stable failed in some other way that I don't have the log from).

[Feature request]: Add a way to clean the cache at startup.

When working with importers and assets generated by them, the cache needs to be deleted. Currently we either have to delete it manually or increment the importer version, but I think an environment variable could be a nice addition to facilitate this.
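A sketch of the env-var approach (the variable name and cache path are assumptions):

    use std::{env, fs, io, path::Path};

    // Hypothetical sketch: wipe the cache directory on daemon startup
    // when the user opts in via an environment variable.
    fn maybe_clean_cache(cache_dir: &Path) -> io::Result<()> {
        if env::var_os("DISTILL_CLEAR_CACHE").is_some() && cache_dir.exists() {
            fs::remove_dir_all(cache_dir)?;
        }
        Ok(())
    }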
