Emergent, a Visual Testrunner for Rust

The emergent project is an attempt to create a visual testrunner for Rust.

"Visual" in the sense that not only the test-results can be shown but also the output of the tests can be visualized as vector graphics, drawings, animations, and may be even more.

The "vision" of this project is to build a basis for developing applications that are simulatable and testable in their visual and internal representation at any time in any state.

Furthermore, the testrunner should be able to create new testcases by interacting directly with the application under test.

Building & Running Tests

So far, emergent is not in a state where it can test packages other than itself, but if you are curious and up for a rough ride, follow the instructions below to get a first look at what this is about.

Prerequisites

Emergent runs on Vulkan graphics drivers only. On Windows, they are most likely available already; on Linux, this article on linuxconfig.org might get you started; and on macOS with Metal support, install the Vulkan SDK for Mac and configure MoltenVK by setting the DYLD_LIBRARY_PATH, VK_LAYER_PATH, and VK_ICD_FILENAMES environment variables as described in Documentation/getting_started_macos.html.
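
For example, assuming the SDK was unpacked to ~/vulkan-sdk (the exact paths depend on the SDK version and install location, so treat these as placeholders):

export VULKAN_SDK=~/vulkan-sdk/macOS
export DYLD_LIBRARY_PATH=$VULKAN_SDK/lib:$DYLD_LIBRARY_PATH
export VK_LAYER_PATH=$VULKAN_SDK/share/vulkan/explicit_layer.d
export VK_ICD_FILENAMES=$VULKAN_SDK/share/vulkan/icd.d/MoltenVK_icd.json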

Furthermore, the compilation steps need an LLVM installation. On Linux and macOS, LLVM should already be available; on Windows, LLVM can be installed with Chocolatey:

choco install llvm

Compile & Run

Clone the repository, cd into it, and then check out the submodules with

git submodule update --init

On Windows, Ninja is needed to compile shaderc-sys:

choco install ninja

and then compile & run emergent with

FORCE_SKIA_BINARIES_DOWNLOAD=1 cargo run

This should - with LLVM installed, a decent Vulkan driver, and a bit of luck - compile everything, power up the testrunner, and visualize some early results of the emergent library's test cases.

It does that by starting the testrunner, which starts cargo watch internally, which in turn runs cargo test on the emergent library, captures its results, and visualizes them. From now on, changes are detected and the visualizations are updated automatically.

Plan

My plan is to ...

  • make a graphics library with a GPU backend and high-quality perspective anti-aliasing available to the Rust ecosystem. A first attempt is to interface with Google's Skia library. Later, if they mature, Pathfinder and Skribo may be used as replacements.
  • create a decent abstraction library for drawings and layout. There are modern attempts like Piet, Stretch, and Druid, but I feel the focus of these projects doesn't fit: Piet focuses on per-platform implementations, which I would like to see unified; Stretch puts all layout under the 2D Flexbox doctrine, which seems rather inflexible; and Druid combines UI widgets and hierarchy with layout, which makes the layout engine unusable for vector drawings. My goal for a drawing library is a complete, fast, and compact serializable representation with a minimal set of external functional dependencies, such as text measurements and path combinators. The layout engine should be built from one-dimensional combinators and scale up to three or four dimensions while providing a simplified set of combinators for creating 2D layouts (see the sketch after this list).
  • create an application component system that looks like a combination of TEA and React. While React focuses on UI components, TEA focuses on having one single application state. I think that by layering multiple TEAs, an optimal combination of both worlds is possible. Conceptually, this is probably the hardest part to realize.
  • create an interpolation layer that enables animations. This should work similarly to the DOM diffing algorithms that enable incremental updates, but also produce animations that are independent of layout hierarchies and placement.
  • use or create a gesture recognition library.
  • specify and create text protocol based I/O interfaces and simulators for operating system functionality, so that all desktop and mobile operating systems look similar to the application and interfacing with them does not depend on complex FFI APIs.
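
To illustrate the layout direction, here is a minimal sketch of what one-dimensional layout combinators could look like; all names (Span, sequence) are hypothetical and not part of any existing API:

```rust
/// A one-dimensional extent with a minimum and a preferred length
/// (hypothetical types for illustration).
#[derive(Clone, Copy, Debug, PartialEq)]
struct Span {
    min: f32,
    preferred: f32,
}

/// Combine two spans sequentially along the same axis.
fn sequence(a: Span, b: Span) -> Span {
    Span {
        min: a.min + b.min,
        preferred: a.preferred + b.preferred,
    }
}

fn main() {
    let label = Span { min: 10.0, preferred: 40.0 };
    let button = Span { min: 20.0, preferred: 60.0 };
    // A 2D layout is then one combined span per dimension; higher
    // dimensions simply add more spans.
    let row_width = sequence(label, button);
    assert_eq!(row_width, Span { min: 30.0, preferred: 100.0 });
}
```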

All these components are developed very carefully in lock-step with the testrunner, strictly adhering to the first principle that a component and all its functionality must be fully visualizable, simulatable, reproducible, and, as a result, testable.

History, Context, and Vision

I've had the vision of live programming for a long time: I dived deep into languages and frameworks, built countless prototypes, and visited conferences, but I never felt that I - or the live programming community - was able to realize what I had imagined.

Years ago, while working on a Visual Studio extension that executed F# code live and rendered the result into the editor, I realized that focusing on live programming - while seemingly motivating at first - is doomed to fail when attempted in isolation.

I now think that live programming does not make sense except as a good demo, because developers spend most of their time refactoring: creating new features is the trivial part of programming; modifying an environment that supports all existing features while enabling new ones is the complex part.

The live programming research community answers this problem by creating specifically suited live programming languages or environments, and some researchers have even created several over the years.

Somehow, all that investment does not seem to lead to a usable solution. And I think I know why: from my point of view, live programming is merely a by-product of a larger solution to a much more pressing problem, and that is live testing.

This project should enable live testing up to the point where we can test any imagined aspect of the software in development. The result will be much more than live programming ever attempted: an accessible representation of the application in any state at any time - a timeless god's-eye view into the multiverse of the application under test that can be navigated, extended, tested, and compared with previous snapshots.

To realize that, I think we only need to push one recently developed concept a bit further.

Basically, it is event sourcing and unidirectional data flow that make all of this possible. React and Flux were the first popular concepts that tried to map interaction onto the input/output model of simple console applications while simplifying state handling at the same time. That led to The Elm Architecture, which is finally disrupting MVC and putting itself at the pinnacle of application logic design.

If all input to an application can be serialized, and the application's state and side effects captured in full, it is possible to put the application into a sandbox, provide environments to it, and simulate its results in the form of its state and visual output.
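
A minimal sketch of that idea, assuming serde for serialization (the Msg and Model types are made up for illustration):

```rust
use serde::{Deserialize, Serialize};

/// All input to the application is a serializable message (illustrative).
#[derive(Serialize, Deserialize, Clone, Debug)]
enum Msg {
    Tap { x: f32, y: f32 },
    Key(char),
}

/// The complete application state, also serializable.
#[derive(Serialize, Deserialize, Default, Debug)]
struct Model {
    taps: usize,
}

/// A pure update function: state + message -> new state.
fn update(model: Model, msg: &Msg) -> Model {
    match msg {
        Msg::Tap { .. } => Model { taps: model.taps + 1 },
        Msg::Key(_) => model,
    }
}

/// Replaying a recorded message log reproduces the exact same state,
/// which is what makes the application simulatable and testable.
fn replay(log: &[Msg]) -> Model {
    log.iter().fold(Model::default(), update)
}

fn main() {
    let log = vec![Msg::Tap { x: 1.0, y: 2.0 }, Msg::Key('a')];
    assert_eq!(replay(&log).taps, 1);
}
```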

Of course all that is a rather idealistic goal, but I think that we can learn a lot by just trying.

Copyright & License

(c) 2020 Armin Sander

MIT


emergent's Issues

Support scrolling areas.

This is the first kind of UI "component" which requires state that is separate from the application.

For that:

  • A component model is needed so that the application is freed from handling view states.
  • Before rendering, the view components' states must be adjustable by the application's render step (a.k.a. vetoing).
  • The components need to be scoped.

    I considered something in the realm of topo, but the macros are scary and lack IDE support. Also, I feel that scope should be defined by the application and not be related in any way to the call stack - even though lambdas will probably be needed.
    Because a rendering context needs to be provided to render only what's visible, extending it with scoping and lookup functionality should not be that hard (a minimal sketch follows below).
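
A minimal sketch of such a scoped lookup, assuming scopes are application-defined &'static str paths (all names here are illustrative, not emergent's API):

```rust
use std::any::Any;
use std::collections::HashMap;

/// View states keyed by application-defined scope paths (illustrative).
#[derive(Default)]
struct ViewStates {
    states: HashMap<Vec<&'static str>, Box<dyn Any>>,
}

impl ViewStates {
    /// Look up the state for a scope, inserting a default if absent.
    fn get_or_default<S: Any + Default>(&mut self, scope: &[&'static str]) -> &mut S {
        self.states
            .entry(scope.to_vec())
            .or_insert_with(|| Box::new(S::default()))
            .downcast_mut::<S>()
            .expect("scope was used with a different state type")
    }
}

#[derive(Default, Debug)]
struct ScrollState {
    offset: f32,
}

fn main() {
    let mut states = ViewStates::default();
    // The scroll offset lives outside the application state, scoped
    // to the component that owns it.
    let scroll = states.get_or_default::<ScrollState>(&["sidebar", "list"]);
    scroll.offset += 10.0;
}
```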

I think this will be one of the most important additions to the UI framework.

It will enable:

  • Extensible and composable UI components including layout containers, etc.

    I don't actually like layout containers, or any hierarchical UIs for that matter, but they seem the easiest to specify and, in general, to "compute" compared to a relational model. So I'll go along with that until something better comes around, considering that constraint systems behave somewhat strangely in terms of performance and determinism and introduce a level of complexity I don't want anyone to be forced into.

It may enable or at least prepare for:

  • Zooming UI functionality (lazy / on demand rendering, i.e. level of detail).
  • A proper way to define animations.

User Interface Primitives

After thinking for weeks about a suitable component model (#50) and taking a look at several of them, I now think that user interfaces don't need a component model, or at least not a predefined one.

It seems to me that each application has different requirements, which a component system often cannot foresee.

Component systems are provided for two reasons:

  1. to have abstract names and functionality for things, e.g. a button is an abstraction for a laid-out text area with an input area for recognizing touches.
  2. to be able to optimize away work when parts do not change between frames.

But from my experience, a predefined abstraction breaks down a lot sooner than anticipated: as soon as we define a button to contain laid-out text, an application needs a button that contains something else, like a custom drawing that changes with the size of the button.

So, I've decided to focus on the second part first and let the actual "components" emerge by building a solid foundation for UI rendering. This includes primitives for

  • frame caching, for example to avoid recomputing layouts.
  • relational dependency graphs, for matrices and coordinate systems, global layout dependencies, or maybe styles (CSS-like).
  • level of detail management, on demand rendering of details depending on zoom levels.
  • clipping for culling the content in scrolling areas, for example.
  • parallelization for computing text layouts, for example.
  • performance measuring for finding out how to make the UI display faster.

Now, seeing that list, I wish these primitives had been available in the UI frameworks I used before, but they always seemed to be hidden behind a rigid-feeling "component system", making it very hard to realize specific requirements.

So what I want to attempt is to build a set of primitives with which it is possible to create a component system, but which can also be used to create an optimized, application-specific presentation renderer.
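
As an example of the first primitive, frame caching, here is a minimal sketch (FrameCache is a made-up name, not emergent's API): a cache keyed by the inputs of a computation, so that unchanged parts of the UI are not recomputed between frames.

```rust
use std::collections::HashMap;
use std::hash::Hash;

/// Caches computed values (e.g. layouts) across frames, keyed by
/// their inputs (illustrative).
struct FrameCache<K, V> {
    entries: HashMap<K, V>,
}

impl<K: Eq + Hash, V: Clone> FrameCache<K, V> {
    fn new() -> Self {
        Self { entries: HashMap::new() }
    }

    /// Return the cached value for `key`, or compute and store it.
    fn get_or_compute(&mut self, key: K, compute: impl FnOnce() -> V) -> V {
        self.entries.entry(key).or_insert_with(compute).clone()
    }
}

fn main() {
    let mut layouts: FrameCache<(u32, u32), Vec<f32>> = FrameCache::new();
    // The expensive layout computation runs once for this size key ...
    let first = layouts.get_or_compute((800, 600), || vec![0.0, 100.0]);
    // ... and is served from the cache on the next frame.
    let second = layouts.get_or_compute((800, 600), || unreachable!());
    assert_eq!(first, second);
}
```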

Change the drawing scalar type from f32 to f64.

I've read somewhere that f32 is quite limiting for representing larger gaming worlds. Although I don't think this is usually needed for user interfaces, one could imagine that a large zooming canvas would reach floating-point limits sooner than expected.
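
The limit is easy to demonstrate: f32 has a 24-bit mantissa, so beyond 2^24 (16,777,216) it can no longer represent every integer coordinate, which a deeply zoomed canvas could plausibly reach:

```rust
fn main() {
    // f32 has 24 mantissa bits; beyond 2^24, consecutive integers collide.
    let big: f32 = 16_777_216.0; // 2^24
    assert_eq!(big + 1.0, big); // the +1 is lost entirely

    // f64 has 53 mantissa bits and represents the same coordinate fine.
    let big: f64 = 16_777_216.0;
    assert_ne!(big + 1.0, big);
}
```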

Not sure how this affects the Skia backend though.

DPI change must cause a re-run of all visible testcases.

... because the current DPI is part of the environment parameterization the Presentations are generated with.

Not recreating the presentation may cause layout problems when a window is moved from one screen to another with a different resolution.

Implement simple tap gestures.

Time to get interactive.

Right now, tap gestures are required to be able ...

  • to tap on a filename in the compiler message to open the IDE at that point.

    postponed

  • to collapse individual testcases or modules.

The plan:

  • Introduce the concept of area markers in the drawings. These are markers representing drawing rectangles that are computed with the *FastBounds trait family. #31
  • Then a new type called Presentation is used as the default output of a testcase and an application. A Presentation combines drawings, gesture detection requests based on area markers, and event template serialization (see the sketch after this list). #31
  • Introduce a new package emergent-presentation in the subfolder presentation/ (#31):
    • Support event templates. These are the events an application may receive, including holes to be filled in with additional information. The holes may be filled in by convention, or by generic types?

      not needed anymore, see below.

    • Support for visualizing presentations: This should include the gestures requested, the event templates and the markers involved.

      Basic visualization is implemented, but named areas and scopes must be added.
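
A rough sketch of how a Presentation could tie drawings to named areas for hit testing (the types below are illustrative, not the actual emergent-presentation API):

```rust
/// An axis-aligned rectangle (illustrative).
#[derive(Clone, Copy, Debug)]
struct Rect {
    left: f32,
    top: f32,
    right: f32,
    bottom: f32,
}

impl Rect {
    fn contains(&self, x: f32, y: f32) -> bool {
        x >= self.left && x < self.right && y >= self.top && y < self.bottom
    }
}

/// A presentation: drawing output plus named, hit-testable areas.
struct Presentation {
    // drawing: Drawing, // elided: the actual drawing data
    areas: Vec<(&'static str, Rect)>,
}

impl Presentation {
    /// Resolve a tap position to the topmost named area, if any.
    fn area_at(&self, x: f32, y: f32) -> Option<&'static str> {
        self.areas
            .iter()
            .rev() // areas added later are drawn on top
            .find(|(_, r)| r.contains(x, y))
            .map(|(name, _)| *name)
    }
}

fn main() {
    let p = Presentation {
        areas: vec![(
            "testcase.header",
            Rect { left: 0.0, top: 0.0, right: 100.0, bottom: 20.0 },
        )],
    };
    assert_eq!(p.area_at(50.0, 10.0), Some("testcase.header"));
}
```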

Issues:

  • If monotonically increasing unique ids are used to represent the markers, how can snapshot testing be supported for the output of individual testcases?

    not needed; &'static strs are used as area identities by introducing namespaces / scopes.

  • Can event templates be strongly typed so that the information the gesture detector provides can be mapped to them? Of course, all fields are optional. For example, a tap gesture detector may provide taps_count, positions, position (the first one), global_positions (screen-relative positions), etc.
    One idea is to provide a (pure!) function that takes the arguments of the recognizer state. The serializer creates an arbitrary marker state, calls the function with it, and then replaces the values that survived with references that are filled in later.

    Not using templates anymore; the part of the presentation that cannot be serialized stays inside the WindowApplication wrapper.

  • How can the returned positions be mapped to other elements / drawing areas from within the application?

    Postponed.

Render new output only when something changed.

Because the WindowApplication receives all events winit sends through, a re-render happens as soon as the mouse is moved over the application window.

This can be solved by a scene-graph diff, which is probably needed anyway as soon as render caches are in use.
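
A minimal sketch of that idea, skipping the render whenever the newly built presentation equals the previous frame's (assuming presentations can be compared, e.g. via PartialEq or a hash):

```rust
/// Render only when the new presentation differs from the previous
/// frame's (illustrative; assumes P: PartialEq).
fn present<P: PartialEq>(previous: &mut Option<P>, new: P, render: impl FnOnce(&P)) {
    if previous.as_ref() != Some(&new) {
        render(&new);
        *previous = Some(new);
    }
}

fn main() {
    let mut last: Option<u32> = None;
    present(&mut last, 1, |p| println!("rendering {p}")); // renders
    present(&mut last, 1, |_| unreachable!()); // unchanged: skipped
    present(&mut last, 2, |p| println!("rendering {p}")); // renders
}
```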

Consider using serde_tuple for serialization of drawing types.

That would keep the fields accessible without going through getter functions and minimize the serialized footprint at the same time.

https://github.com/kardeiz/serde_tuple
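
A minimal example of what that could look like for a drawing type (Point here is illustrative):

```rust
use serde_tuple::{Deserialize_tuple, Serialize_tuple};

/// Serializes as [10.0,20.0] instead of {"x":10.0,"y":20.0}, while the
/// fields stay public and directly accessible.
#[derive(Serialize_tuple, Deserialize_tuple, Debug, PartialEq)]
pub struct Point {
    pub x: f64,
    pub y: f64,
}

fn main() {
    let p = Point { x: 10.0, y: 20.0 };
    let json = serde_json::to_string(&p).unwrap();
    assert_eq!(json, "[10.0,20.0]");
    assert_eq!(serde_json::from_str::<Point>(&json).unwrap(), p);
}
```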

Sidenote: Serialization is not considered compatible between different versions of emergent. This conflicts a bit with my plan to use snapshot testing: if the format changes, snapshots need to be regenerated.

I think we need both: snapshot testing based on the serialized representation of a drawing, and image snapshots for an additional visual comparison. And for both, a visualization of the differences is needed in the testrunner.

Visualize Compilation Results

The testrunner should also be able to visualize compiler errors and warnings to assist with refactoring.

The first step in this direction has already landed in master. Using the cargo_metadata crate, cargo and rustc output is converted into CompilerMessages and can be processed from there.

The next step is to support rendering text runs and to use Skia's text shaping for measuring and laying out UTF-8 text.
