amethyst / distill

Asset pipeline system for game engines & editor suites.
The rpc_loader should support automatically loading an asset's dependency graph before making the load visible to users. The load_deps field in AssetMetadata already exists, along with get_metadata_with_dependencies in the RPC server; this should be sufficient to implement the logic in the loader. The whole asset graph should be kept uncommitted in the storage until all assets can be committed.
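A rough sketch of how that loading logic could look. All names here (AssetUuid, AssetMetadata, load_asset_graph) are illustrative stand-ins for the crate's actual types, and the metadata lookup stands in for calls to get_metadata_with_dependencies:

```rust
use std::collections::{HashMap, HashSet};

// Illustrative stand-ins, not the crate's real types.
type AssetUuid = u128;

struct AssetMetadata {
    id: AssetUuid,
    load_deps: Vec<AssetUuid>,
}

/// Walk load_deps breadth-first, collecting the full graph before any asset
/// is made visible (committed) to users.
fn load_asset_graph(
    root: AssetUuid,
    metadata: &HashMap<AssetUuid, AssetMetadata>,
) -> Option<Vec<AssetUuid>> {
    let mut pending = vec![root];
    let mut seen: HashSet<AssetUuid> = HashSet::new();
    let mut uncommitted = Vec::new();
    while let Some(id) = pending.pop() {
        if !seen.insert(id) {
            continue; // already visited; also keeps cycles from looping forever
        }
        // Missing metadata: abort and commit nothing.
        let meta = metadata.get(&id)?;
        uncommitted.push(meta.id);
        pending.extend(meta.load_deps.iter().copied());
    }
    // Only at this point would the loader commit the whole set atomically.
    Some(uncommitted)
}
```

If any metadata in the graph is missing, nothing is committed, matching the all-or-nothing visibility described above.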
Currently using 0.3, should upgrade to latest to consolidate dependencies with other crates.
bevy_task seems like a better fit for game stuff than tokio due to its flexible design, and async-executor is part of the bevy_task dependency tree.
A big use-case for the asset pipeline will be to compile shaders into a platform-specific format. These shaders are usually written with #include statements that pull in other shader files at compile time, and it would be good if we could support this. I think it would be possible to handle during import: it would only affect the import process, so the impact on the rest of the system would be minimal.
The reason I want build_deps to be correct is that it will be essential for implementing distributed builds in the future. If we know all the inputs required for a build (i.e. our dependency graph is correct), we can send this data to another machine and perform the build there, spreading the significant CPU load of the build process.
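As one possible approach during import, the shader importer could scan the source for #include directives and record each referenced file as a build dependency, so a distributed build knows every input up front. A minimal sketch — the function name is hypothetical and only the double-quoted include form is handled:

```rust
/// Collect the file names referenced by `#include "..."` directives,
/// to be recorded as build_deps during import.
fn collect_include_deps(source: &str) -> Vec<String> {
    source
        .lines()
        .filter_map(|line| {
            let line = line.trim_start();
            // Match `#include "file.glsl"` style directives.
            let rest = line.strip_prefix("#include")?.trim_start();
            let rest = rest.strip_prefix('"')?;
            let end = rest.find('"')?;
            Some(rest[..end].to_string())
        })
        .collect()
}
```

A real importer would also need to recurse into the included files themselves, since they may contain further #include directives.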
Description:
When an asset exists at assets/the_asset.txt, even if the AssetDaemon has a "txt" importer, the importer will not run unless you touch assets/the_asset.txt while the daemon is running. This is a surprise/gotcha in a workflow where new assets are created and saved before running the daemon.
Expected behaviour:
When the AssetDaemon starts up, it should both process and import new assets. I think it currently processes them but does not import them.
Impact:
User frustration / confusion.
The Loader currently cannot handle cycles in the dependency graph and will stall.
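One way to avoid the stall is to detect cycles up front and fail the load instead of waiting forever. A minimal DFS sketch — the graph representation and names are illustrative, not the loader's actual data structures:

```rust
use std::collections::HashMap;

type AssetId = u32;

/// Returns true if the dependency graph contains a cycle.
fn has_cycle(deps: &HashMap<AssetId, Vec<AssetId>>) -> bool {
    fn visit(
        node: AssetId,
        deps: &HashMap<AssetId, Vec<AssetId>>,
        state: &mut HashMap<AssetId, u8>, // 1 = in progress, 2 = done
    ) -> bool {
        match state.get(&node).copied() {
            Some(1) => return true,  // back-edge: cycle found
            Some(2) => return false, // already fully explored
            _ => {}
        }
        state.insert(node, 1);
        for &dep in deps.get(&node).into_iter().flatten() {
            if visit(dep, deps, state) {
                return true;
            }
        }
        state.insert(node, 2);
        false
    }
    let mut state = HashMap::new();
    deps.keys().any(|&n| visit(n, deps, &mut state))
}
```

Alternatively, the loader could tolerate cycles by committing strongly connected components as a unit, but detection plus a clear error is the simpler first step.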
Decompression should be implemented in the loader and enabled in the asset pipeline for cached assets. This work should probably be offloaded to a different thread to avoid blocking the main thread for larger blobs, or be implemented such that it can stay within a frame budget.
The daemon currently panics on many common user errors, such as file parsing errors. It should propagate asset import errors to the metadata state so that they can be shown to the user. The loader should also handle errors better: there are scenarios where it panics, and others where errors lead to infinite retries.
Currently I use scoped_threadpool when dispatching source file processing. This should probably be replaced with tokio/futures' FuturesUnordered so that we can remove the scoped_threadpool dependency.
Currently serde serialises these as sequences of bytes in text formats, which is really human-unfriendly. Ideally they should be serialised as UUID strings in text formats and as 16 raw bytes in binary formats.
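With serde, the usual pattern is to branch on `Serializer::is_human_readable()` in a custom Serialize impl, emitting a hyphenated UUID string for text formats and the raw 16 bytes otherwise. Here is a self-contained sketch of just the string-formatting half (the helper name is hypothetical):

```rust
/// Format 16 bytes as a standard hyphenated (8-4-4-4-12) UUID string,
/// the form a human-readable serializer would emit.
fn uuid_to_string(id: &[u8; 16]) -> String {
    let hex: String = id.iter().map(|b| format!("{:02x}", b)).collect();
    // Insert hyphens at the standard 8-4-4-4-12 positions.
    format!(
        "{}-{}-{}-{}-{}",
        &hex[0..8], &hex[8..12], &hex[12..16], &hex[16..20], &hex[20..32]
    )
}
```

The binary branch would simply call `serializer.serialize_bytes(&self.0)`, keeping the compact 16-byte representation for binary formats.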
To make it easier to load assets from code, it could be cool to generate AssetUuid constants for each asset based on the asset's path.
SerdeImportable is a really convenient way to define custom assets, but the type identifier is a UUID which can be annoying to use without tooling. It would be great to support type names in addition to UUIDs for identifying types.
I'm not sure if it matters much, but to perhaps reduce the set of dependencies we could either remove or replace time. Here's the deprecation issue: time-rs/time#136
The loader and related systems should support an execution time budget that can be configured by users, preferably on the microsecond scale.
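A straightforward way to enforce such a budget is to time-box the per-frame work loop with std::time::Instant. A sketch, with an illustrative generic work queue rather than the loader's real state:

```rust
use std::time::{Duration, Instant};

/// Process items from `queue` until it is empty or `budget` has elapsed.
/// Returns how many items were processed this frame.
fn process_with_budget<T>(
    queue: &mut Vec<T>,
    budget: Duration,
    mut work: impl FnMut(T),
) -> usize {
    let start = Instant::now();
    let mut processed = 0;
    while start.elapsed() < budget {
        match queue.pop() {
            Some(item) => {
                work(item);
                processed += 1;
            }
            None => break, // queue drained before the budget ran out
        }
    }
    processed
}
```

Note that the budget is only checked between items, so a single oversized item can still overshoot it; microsecond-scale budgets work best when individual work units are small.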
By implementing an in-memory transport (just something in-process, like a channel) for RPC (both loader and asset hub server), tokio's IO dependencies can be made optional, greatly reducing the total number of dependencies.
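At its simplest, an in-process transport is just a pair of connected channels, with no socket or IO types involved. A sketch using std::sync::mpsc — the struct and function names are illustrative, not the crate's API:

```rust
use std::sync::mpsc;

/// One endpoint of an in-process byte transport.
struct InMemoryTransport {
    tx: mpsc::Sender<Vec<u8>>,
    rx: mpsc::Receiver<Vec<u8>>,
}

/// Create two connected endpoints; bytes sent on one are received by the other.
fn in_memory_pair() -> (InMemoryTransport, InMemoryTransport) {
    let (a_tx, b_rx) = mpsc::channel();
    let (b_tx, a_rx) = mpsc::channel();
    (
        InMemoryTransport { tx: a_tx, rx: a_rx },
        InMemoryTransport { tx: b_tx, rx: b_rx },
    )
}
```

The RPC layer would then be written against a transport trait that both this and the networked transport implement, so tokio's IO stack becomes a feature-gated option.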
The UDS/named pipes transport in asset_hub_service is only marginally faster and seems to have a number of platform-specific error cases that I don't want to spend time debugging. We should remove it for now and replace it with a shared-memory IPC later.
Currently the main LoadState state machine iterates through all LoadStates every frame. Generally it only needs to iterate over states that may change. I think avoiding processing of Loaded assets will be sufficient, and can probably be achieved by processing only when a ref is added or removed.
While the asset daemon and RPC Loader work well for development environments, we should design distribution scenarios to support our current and upcoming target platforms.
WASM and PC are probably highest priority.
Every time a new version of the metadata is created, the asset daemon's RPC server sends out a new snapshot to every registered listener (atelier_schema::service::asset_hub::listener). Currently, the rpc_loader replaces its active snapshot with the new one every time. To implement hot reloading, the rpc_loader should check each new snapshot for changed assets and trigger a reload of the modified assets. I'm not sure what should happen with deleted assets, though.
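The snapshot check could be a simple diff by asset version. This sketch assumes a simplified snapshot representation (asset id mapped to a version hash), not the actual Cap'n Proto schema:

```rust
use std::collections::HashMap;

/// Assets whose version differs between snapshots, or that are new in `new`.
/// Deleted assets (present in `old` but not `new`) are not reported here;
/// what to do with them is still an open question.
fn changed_assets(old: &HashMap<u32, u64>, new: &HashMap<u32, u64>) -> Vec<u32> {
    new.iter()
        .filter(|&(id, version)| old.get(id) != Some(version))
        .map(|(id, _)| *id)
        .collect()
}
```

The rpc_loader would run this against its previous snapshot on each update and trigger reloads only for the returned ids instead of replacing everything.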
The schema crate currently runs capnpc to generate code in build.rs, which incurs a dependency on capnpc for everyone using the project. By moving the contents of build.rs into a separate binary sub-crate that can be run manually, we can make sure capnpc is only necessary when a change to the schema is necessary.
When a graphics device is lost, all assets that were loaded into the driver will be lost, and need to be reloaded from source. Ideally, this would be done synchronously to prevent inconsistent frames where some things are loaded and some are no longer loaded.
Probably need a websocket implementation of the stream transport for this to work.
In my app when updating the loader, I get
thread 'main' panicked at 'there is no reactor running, must be called from the context of a Tokio 1.x runtime', /Users/pmd/.cargo/registry/src/github.com-1ecc6299db9ec823/distill-loader-0.0.2/src/packfile_io.rs:176:13
RpcIO uses the runtime it creates (self.runtime.lock().unwrap()) and passes it to process_requests. I think PackfileReader tries to do the same by calling self.0.runtime.enter(); but in tokio 1.x, enter() returns an EnterGuard that must be kept alive, so those lines should probably be let _guard = self.0.runtime.enter(); instead (binding with let _ = would drop the guard immediately).
I'll test and PR this fix (assuming it's the correct fix).
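One caveat for whoever writes the fix: in Rust, `let _ = value;` drops the value at the end of that statement, while `let _guard = value;` keeps it alive until the end of the scope. Since tokio 1.x's `enter()` returns a guard, only the named binding keeps the runtime context active. A self-contained illustration of the difference, using a toy guard type rather than tokio:

```rust
use std::cell::Cell;

// Toy RAII guard: "enters" a context on creation, "leaves" it on drop.
struct Guard<'a>(&'a Cell<bool>);

impl Drop for Guard<'_> {
    fn drop(&mut self) {
        self.0.set(false); // leave the context when the guard is dropped
    }
}

/// Returns whether the context is still active when we check it,
/// depending on how the guard was bound.
fn entered_during_work(bind_named: bool) -> bool {
    let active = Cell::new(false);
    let make = || {
        active.set(true); // enter the context
        Guard(&active)
    };
    if bind_named {
        let _guard = make(); // guard lives to the end of this block
        active.get()
    } else {
        let _ = make(); // guard dropped immediately: context already left
        active.get()
    }
}
```

The same reasoning applies to tokio's EnterGuard: `self.0.runtime.enter();` as a bare statement drops the guard at once, so the subsequent code never runs inside the runtime context.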
In an effort to minimize the dependency footprint of the project, implementing an in-memory key-value store that exposes the LMDB API would allow lmdb to be made an optional dependency. It will need to support the transaction semantics of LMDB.
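A toy sketch of the core transaction semantics to mimic: a write transaction stages changes against the current state and only publishes them on commit, while dropping it without committing aborts. The names and the single-map layout are illustrative, not LMDB's actual API (which also has read transactions, multiple databases, and MVCC):

```rust
use std::collections::BTreeMap;

#[derive(Default)]
struct Store {
    data: BTreeMap<Vec<u8>, Vec<u8>>,
}

struct WriteTxn<'a> {
    store: &'a mut Store,
    staged: BTreeMap<Vec<u8>, Vec<u8>>,
}

impl Store {
    fn write(&mut self) -> WriteTxn<'_> {
        WriteTxn { store: self, staged: BTreeMap::new() }
    }
}

impl WriteTxn<'_> {
    fn put(&mut self, key: &[u8], value: &[u8]) {
        self.staged.insert(key.to_vec(), value.to_vec());
    }
    /// Reads see this transaction's own writes first, then committed data.
    fn get(&self, key: &[u8]) -> Option<&[u8]> {
        self.staged
            .get(key)
            .or_else(|| self.store.data.get(key))
            .map(|v| v.as_slice())
    }
    /// Publish all staged writes at once.
    fn commit(self) {
        self.store.data.extend(self.staged);
    }
    // Dropping a WriteTxn without calling commit() discards `staged`,
    // i.e. the transaction aborts.
}
```

A drop-in replacement for lmdb would of course need far more (cursors, zero-copy reads, durability), but the commit/abort staging above is the part the loader and daemon actually rely on.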
It would be nice to have a macro that does something like let uuid: AssetUuid = asset_uuid!("8ce70ab5-e725-4661-bedc-bc66c2c97221").
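The compile-time half would likely be a proc macro; the parsing it needs is just hex decoding with hyphens skipped. A hedged sketch of that parsing (parse_uuid is a hypothetical helper and does not validate hyphen positions):

```rust
/// Parse a hyphenated UUID string into 16 bytes, or None if it is malformed.
fn parse_uuid(s: &str) -> Option<[u8; 16]> {
    let mut out = [0u8; 16];
    let mut nibbles = s.chars().filter(|c| *c != '-').map(|c| c.to_digit(16));
    for byte in out.iter_mut() {
        // Each byte is two hex digits; bail out on any non-hex character
        // or if the input runs short.
        let hi = nibbles.next()??;
        let lo = nibbles.next()??;
        *byte = (hi * 16 + lo) as u8;
    }
    // Reject trailing garbage beyond 32 hex digits.
    if nibbles.next().is_some() {
        return None;
    }
    Some(out)
}
```

In a proc macro, a malformed literal would become a compile error instead of a None, which is the main benefit over parsing at runtime.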
Deserialized Handle/GenericHandle should not keep assets alive, which should be as easy as skipping add/remove ref for them, but they should still support cloning into real references. Should probably remove mutable access to assets so that we don't need to handle users putting valid references into assets.
inventory (and linkme) do not work on some platforms, so maybe we should prefer manual registration by default. Making inventory optional will also reduce the dependency footprint of the whole project.
SerdeImportable depends on typetag right now. We might need to fork or switch out typetag to ensure we can still have simple custom assets.
When hot reloading, these steps are performed while the asset is loaded, and thus should not be top-level states.
In preparation for implementing loading of asset graphs in the loader (#4), an Importer implementation that supports load dependencies should be written.
It appears the original crate is no longer maintained and Mozilla is maintaining this fork: https://github.com/mozilla/lmdb-rs
Currently assets are deserialized on the main thread. For larger byte blobs, deserialization should be dispatched to another thread so that it doesn't block the frame loop.
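A minimal sketch of the offload pattern: spawn the work on another thread and hand the frame loop a channel it can poll with try_recv each frame. The deserialization itself is stubbed out here (it just reports the blob length), and the function name is hypothetical:

```rust
use std::sync::mpsc;
use std::thread;

/// Kick off deserialization of a blob on a worker thread.
/// The frame loop polls the returned receiver instead of blocking.
fn deserialize_off_thread(blob: Vec<u8>) -> mpsc::Receiver<usize> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // Stand-in for the real deserialization work on the blob.
        let result = blob.len();
        let _ = tx.send(result);
    });
    rx
}
```

In practice this would go through a thread pool or the async executor already used by the loader rather than spawning a fresh thread per blob.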
Error: 1-11T00:42:06.812][ERROR][atelier_loader::rpc_io] Error connecting RpcIO: Connection refused (os error 61)
This error is preventing tests from completing in CI.
I believe we should:
I'm new to Atelier, so I tried to follow the "Try it out" instructions in the readme but failed:
warning: profiles for the non root package will be ignored, specify profiles at the workspace root:
package: /home/vagrant/atelier-assets/daemon/Cargo.toml
workspace: /home/vagrant/atelier-assets/Cargo.toml
error: `cargo run` could not determine which binary to run. Use the `--bin` option to specify a binary, or the `default-run` manifest key.
available binaries: atelier-daemon, atelier-client, atelier-cli
So I tried specifying that I wanted to run atelier-client:
$ cargo run --release --bin atelier-client
warning: profiles for the non root package will be ignored, specify profiles at the workspace root:
package: /home/vagrant/atelier-assets/daemon/Cargo.toml
workspace: /home/vagrant/atelier-assets/Cargo.toml
Finished release [optimized] target(s) in 0.22s
Running `target/release/atelier-client`
thread '<unnamed>' panicked at 'failed to create named pipe: Os { code: 2, kind: NotFound, message: "No such file or directory" }', src/libcore/result.rs:1084:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace.
thread '<unnamed>' panicked at 'failed to create named pipe: Os { code: 2, kind: NotFound, message: "No such file or directory" }', src/libcore/result.rs:1084:5
thread '<unnamed>' panicked at 'failed to create named pipe: Os { code: 2, kind: NotFound, message: "No such file or directory" }', src/libcore/result.rs:1084:5
thread '<unnamed>' panicked at 'failed to create named pipe: Os { code: 2, kind: NotFound, message: "No such file or directory" }', src/libcore/result.rs:1084:5
thread '<unnamed>' panicked at 'failed to create named pipe: Os { code: 2, kind: NotFound, message: "No such file or directory" }', src/libcore/result.rs:1084:5
thread '<unnamed>' panicked at 'failed to create named pipe: Os { code: 2, kind: NotFound, message: "No such file or directory" }', src/libcore/result.rs:1084:5
thread '<unnamed>' panicked at 'failed to create named pipe: Os { code: 2, kind: NotFound, message: "No such file or directory" }', src/libcore/result.rs:1084:5
thread '<unnamed>' panicked at 'failed to create named pipe: Os { code: 2, kind: NotFound, message: "No such file or directory" }', src/libcore/result.rs:1084:5
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Any', src/libcore/result.rs:1084:5
So that's not the right way to do it either. Also tried running atelier-daemon instead:
warning: profiles for the non root package will be ignored, specify profiles at the workspace root:
package: /home/vagrant/atelier-assets/daemon/Cargo.toml
workspace: /home/vagrant/atelier-assets/Cargo.toml
Compiling openssl-sys v0.9.48
Compiling alsa-sys v0.1.2
error: failed to run custom build command for `alsa-sys v0.1.2`
Caused by:
process didn't exit successfully: `/home/vagrant/atelier-assets/target/release/build/alsa-sys-0f3a895d3b79df9d/build-script-build` (exit code: 101)
--- stderr
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: "`\"pkg-config\" \"--libs\" \"--cflags\" \"alsa\"` did not exit successfully: exit code: 1\n--- stderr\nPackage alsa was not found in the pkg-config search path.\nPerhaps you should add the directory containing `alsa.pc\'\nto the PKG_CONFIG_PATH environment variable\nNo package \'alsa\' found\n"', src/libcore/result.rs:1084:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace.
warning: build failed, waiting for other jobs to finish...
error: build failed
So it seems something is missing from the getting-started instructions.
The above is on Ubuntu, master branch, Rust nightly (stable failed in some other way; I don't have that log).
May need to clear metadata in the DB on startup for assets in folders that are no longer registered.
When working with importers and assets generated by them, the cache needs to be deleted. Currently we either have to delete it manually or increment the importer version, but I think an environment variable could be a nice addition to facilitate this.