xrl's People

Contributors

1011x, bytebuddha, c4eater, cogitri, dpmkl, ijsnow, jakalope, ktomsic, little-dude


xrl's Issues

Run xi-core in a separate thread

I'd actually like to make xrl::Client a trait and then create Executable and Process structs that could be used instead of Client. As a bonus, we could implement most of the logic in the trait definition and only need to implement a send and a recv method for each.

Pros:

  • Doesn't require the user to have the xi-core executable.
  • xi-core is relatively easy to update (recompile).
  • Would allow the user to send serde_json::Values directly.
  • You could switch between them easily (at compile time only).

Cons:

  • xi errors will be sent to stderr instead; they could be collected, but that's up to the user.
  • Would break with xi updates unless pinned to a particular version.
  • Probably more I'm not thinking of.

I just wanted to get your opinion before continuing. I've been using xi-core as a library for a few months now, and it has made dealing with xi-core (while it's constantly changing) a breeze.
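The trait idea above could look something like the following minimal sketch. None of this is xrl's actual API: the trait shape, the request method, and the InProcess backend are all hypothetical, and the demo backend just echoes messages back instead of talking to an embedded xi-core.

```rust
use std::io;

// Hypothetical sketch: Client as a trait, with shared RPC plumbing in
// default methods so each backend only implements send and recv.
pub trait Client {
    /// Send one serialized RPC message to xi-core.
    fn send(&mut self, msg: &str) -> io::Result<()>;
    /// Receive one serialized RPC message from xi-core.
    fn recv(&mut self) -> io::Result<String>;

    /// Default logic shared by all backends: send a request and
    /// read back the next message.
    fn request(&mut self, method: &str, params: &str) -> io::Result<String> {
        self.send(&format!(r#"{{"method":"{}","params":{}}}"#, method, params))?;
        self.recv()
    }
}

// A stand-in backend (names are illustrative). A real "Process" backend
// would wrap the xi-core child process; an "Executable"/in-process one
// would hand messages to an embedded xi-core event loop.
pub struct InProcess {
    pub queue: Vec<String>,
}

impl Client for InProcess {
    fn send(&mut self, msg: &str) -> io::Result<()> {
        // Echo back for demonstration only.
        self.queue.push(msg.to_string());
        Ok(())
    }
    fn recv(&mut self) -> io::Result<String> {
        self.queue
            .pop()
            .ok_or_else(|| io::Error::new(io::ErrorKind::WouldBlock, "no message"))
    }
}
```

Because request lives in the trait definition, switching between an in-process backend and a child-process backend is just a matter of swapping the concrete type at compile time, as the pros above suggest.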

Rework Frontend Trait

Currently the Frontend trait has a method for each xi event. This is alright for now but will get pretty unmanageable as xi adds more methods. I think we should create an enum that represents each xi-core notification (we can call it XiEvent for now), so that Frontend only needs a single method taking an instance of XiEvent. This would make the trait easier to implement, and adding features to the frontend would no longer be a breaking change every time xi-core gets a new RPC method.

Since this would be a breaking change, I'll wait a while before opening a PR so anyone who might be using the library has time to see this.

@little-dude, since this would be a breaking change, I'd appreciate your thoughts on it.
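A minimal sketch of what the proposed refactor might look like. The variant names and payload types here are illustrative, not xrl's real types (real payloads would be the existing Update, ScrollTo, etc. structs):

```rust
// Hypothetical single enum covering every xi-core notification.
// New notifications become new variants; implementors with a
// catch-all arm keep compiling, so additions are non-breaking.
#[derive(Debug)]
pub enum XiEvent {
    Update(String),      // payload simplified to String for the sketch
    ScrollTo(u64, u64),  // (line, column)
    DefStyle(String),
}

// The reworked trait: one method instead of one per RPC.
pub trait Frontend {
    fn handle_event(&mut self, event: XiEvent);
}

// Example implementor that records what it saw.
pub struct RecordingFrontend {
    pub seen: Vec<String>,
}

impl Frontend for RecordingFrontend {
    fn handle_event(&mut self, event: XiEvent) {
        match event {
            XiEvent::Update(_) => self.seen.push("update".into()),
            XiEvent::ScrollTo(l, c) => self.seen.push(format!("scroll {}:{}", l, c)),
            _ => self.seen.push("other".into()),
        }
    }
}
```

The catch-all arm in the match is what makes future XiEvent variants a non-breaking change for frontends that don't care about them.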

Compiler panics on 'cargo doc'

Since the latest merge, the compiler seems to panic when generating docs.
I know this is probably a rustc bug (as the compiler itself proclaims) and has nothing to do with the changes made, but I'm not familiar enough with it to properly report it in the rustc repo.

Anyway, this is what I'm getting after running cargo doc:


thread 'rustc' panicked at 'Box<Any>', src/librustc_errors/lib.rs:638:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace.
error: aborting due to previous error


note: the compiler unexpectedly panicked. this is a bug.

note: we would appreciate a bug report: https://github.com/rust-lang/rust/blob/master/CONTRIBUTING.md#bug-reports

note: rustc 1.37.0-nightly (991c719a1 2019-06-08) running on x86_64-unknown-linux-gnu

error: Could not document `xrl`.

Caused by:
  process didn't exit successfully: `rustdoc --edition=2018 --crate-name xrl src/lib.rs --color always -o /home/ytm/gits/xrl/target/doc -L dependency=/home/ytm/gits/xrl/target/debug/deps --extern bytes=/home/ytm/gits/xrl/target/debug/deps/libbytes-f3979da2585e2f8c.rmeta --extern futures=/home/ytm/gits/xrl/target/debug/deps/libfutures-4d35d0dfb2cf9c5c.rmeta --extern log=/home/ytm/gits/xrl/target/debug/deps/liblog-57adf34c664418a7.rmeta --extern serde=/home/ytm/gits/xrl/target/debug/deps/libserde-c032e6fc1443ab71.rmeta --extern serde_derive=/home/ytm/gits/xrl/target/debug/deps/libserde_derive-ea462f6a94497eaa.so --extern serde_json=/home/ytm/gits/xrl/target/debug/deps/libserde_json-174790a96bed0ec1.rmeta --extern syntect=/home/ytm/gits/xrl/target/debug/deps/libsyntect-4b9923d54456b828.rmeta --extern tokio=/home/ytm/gits/xrl/target/debug/deps/libtokio-23628aae7aaab09a.rmeta --extern tokio_codec=/home/ytm/gits/xrl/target/debug/deps/libtokio_codec-c5bf1894fd967b98.rmeta --extern tokio_process=/home/ytm/gits/xrl/target/debug/deps/libtokio_process-c3dbf4897bc08d57.rmeta` (exit code: 1)

Incorporating into more frontends

Hey there,

I'm the author of gxi. I split gxi up into multiple crates a while ago, but then stumbled upon this. Is there still interest in this? If so, I'd very much like to help here; a shared implementation would be amazing!

Current problems I see with using xrl in gxi (please do point out any mistakes on my side):

  • gxi is currently structured like this:

[image: diagram of gxi's MainWin containing multiple EditViews]

Everything themed black is the MainWin, which holds EditViews (everything that's white, basically the editing area). The MainWin takes care of the general state: it receives notifications from xi-editor and forwards some of them to the EditView (see https://github.com/Cogitri/gxi/blob/master/src/gxi/src/main_win.rs#L357). Some notifications are handled by the MainWin itself (e.g. alert/available_themes); others are sent to the respective EditView as determined by the view-id contained in the message (e.g. update/scroll_to). Both the MainWin and all EditViews hold one (refcounted) MainState which, as the name suggests, holds state shared across all views, such as the config, theme, etc. Just about everything in gxi is refcounted, because GTK objects are refcounted too. This means that in Frontend::update I wouldn't need to pass in a &mut self of the MainWin; instead I would use a &self reference of the MainWin and get the current EditView (https://github.com/Cogitri/gxi/blob/master/src/gxi/src/main_win.rs#L749).

This doesn't seem to fit too well into xrl's concept of the Frontend Trait.

Responses are not handled correctly

Responses are defined as:

#[derive(Serialize, PartialEq, Clone, Debug)]
pub struct Response {
    pub id: u64,
    pub result: Result<Value, Value>,
}

So they are serialized/deserialized as:

{"id": 0, "result": {"Ok": "foo"}}
{"id": 0, "result": {"Err": "bar"}}

Whereas they should be serialized/deserialized as:

{"id": 0, "result": "foo"}
{"id": 0, "error": "bar"}

The problem is that, as far as I understand, we cannot control how Result gets serialized, so we'll have to write our own serializer/deserializer, or work around this with private types that we use only for serialization (this code is adapted from https://github.com/paritytech/jsonrpc/blob/12b6210a11e449364d05b85c79187cb583acc5cb/core/src/types/response.rs#L32):

#[derive(Serialize, PartialEq, Clone, Debug)]
pub struct Response {
    pub id: u64,
    pub result: Result<Value, Value>,
}

#[derive(Serialize, PartialEq, Clone, Debug)]
pub struct Success {
    pub id: u64,
    pub result: Value,
}

#[derive(Serialize, PartialEq, Clone, Debug)]
pub struct Failure {
    pub id: u64,
    pub error: Value,
}

#[derive(Debug, PartialEq, Clone, Deserialize, Serialize)]
#[serde(untagged)]
pub enum Output {
	Success(Success),
	Failure(Failure),
}

impl From<Response> for Output {
    fn from(value: Response) -> Self {
        match value.result {
            Ok(result) => Output::Success(Success { id: value.id, result }),
            Err(error) => Output::Failure(Failure { id: value.id, error }),
        }
    }
}

impl From<Output> for Response {
    fn from(value: Output) -> Self {
        match value {
            Output::Success(s) => Response { id: s.id, result: Ok(s.result) },
            Output::Failure(f) => Response { id: f.id, result: Err(f.error) },
        }
    }
}

The Output workaround works, but it requires an extra copy of the Value when deserializing and feels a bit hacky, so I'll try implementing Deserialize and Serialize manually.

Move LineCache from xi-tui here.

I've found reimplementing the LineCache struct from xi-tui a big pain, since any frontend is basically going to use each Update the same way (applying it to their line cache). I think it would make sense to simply include LineCache in xrl. By the way, I've been keeping an eye on xi-tui for quite some time now (even using it instead of nano!). Thanks for working on it!

Docs?

I'd love to read docs about that :)

Optimize LineCache

Currently the LineCache struct doesn't do much beyond applying an update when we receive one from xi. It was copied here from xi-term as-is, but I think we can do better.

I won't have time to work on this for a while; I'm just noting some stuff here so I can remember it whenever I get back to it. But if anyone who stumbles by wants to take a crack at it, be my guest: PRs are always welcome.

  • Track changed lines (lines that need to be re-rendered).
    This is harder than you would think (or I'm an idiot 😉): xi updates are defined as a function from the old cache state to the new one, so figuring out exactly when a line has changed is not as easy as it first seems.
  • Add a method to retrieve only changed lines.

As I think of more ways to improve the line cache over the next couple of months while I mess with xi-term, I'll note any other possible optimizations here.
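The two bullet points above could be prototyped roughly as follows. This is a hypothetical sketch, not xrl's LineCache: the field and method names (set_line, take_dirty) are invented for illustration, and a real implementation would mark lines dirty while applying a full xi Update rather than per-line.

```rust
use std::collections::BTreeSet;

// Illustrative dirty-line tracking: the cache remembers which line
// indices were touched so a frontend can redraw only those.
pub struct LineCache {
    lines: Vec<String>,
    dirty: BTreeSet<usize>,
}

impl LineCache {
    pub fn new() -> Self {
        LineCache { lines: Vec::new(), dirty: BTreeSet::new() }
    }

    /// Replace (or append) a line, recording it as changed only if
    /// its content actually differs.
    pub fn set_line(&mut self, i: usize, text: String) {
        if i >= self.lines.len() {
            self.lines.resize(i + 1, String::new());
        }
        if self.lines[i] != text {
            self.lines[i] = text;
            self.dirty.insert(i);
        }
    }

    /// Drain and return the changed lines since the last call,
    /// as (index, text) pairs in ascending order.
    pub fn take_dirty(&mut self) -> Vec<(usize, &str)> {
        let dirty = std::mem::take(&mut self.dirty);
        dirty.into_iter().map(|i| (i, self.lines[i].as_str())).collect()
    }
}
```

The comparison in set_line is what makes this non-trivial for real xi updates: copy and skip operations can shift line numbers without changing content, so "did this index change?" depends on the whole old-state-to-new-state mapping, as noted above.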

Update Tokio

I plan to start work again soon, and the first thing we should do is move away from the tokio-io and tokio-core crates. Tokio has been in a lot of flux lately, but I think it's settled down enough to begin updating xrl. The only tricky part is that the new reactor requires the Send trait to be implemented on all futures; even still, I should have a PR for this sometime next week.

Derive Clone for Update

Hey cool library!
I am using it to serialize xi types in my own frontend. Is there a reason why Clone isn't derived for Update, Operation, and OperationType?

[Q] How to use the linecache

Hello,

It appears that I'm misusing the linecache (or I somehow discovered a weird bug, which may very well be in gxi, though). I'm not 100% certain how to trigger the behaviour I'm going to describe, but it usually happens after working on a rather big document for a while. At some point the linecache just bugs out, not giving me the right lines, which manifests as stuff like this:

i: 760 line.text: 
i: 761 line.text:                 for x in line_selections {
i: 762 line.text:                 for x in line_selections {
i: 763 line.text:                     if let Some(cur) = begin_selection {
i: 764 line.text:                     if let Some(cur) = begin_selection {
i: 765 line.text:                     if let Some(cur) = begin_selection {
i: 766 line.text:                         // Make sure to use the lowest value of any selection so it's in the view
i: 767 line.text:                     }

Here, i is the line number (during drawing) and line.text is the text of the line. The duplicates shouldn't be there. Maybe I'm getting the lines during an update of the linecache? My drawing function is a bit primitive and just looks like this:

        for i in first_line..last_line {
            // Keep track of the starting x position
            if let Some(line) = self.line_cache.lines().get(i as usize) {
                // draw each line with styles
            }
        }

This happens both with gxi using all of xrl and with gxi only using xrl's linecache (Cogitri/Tau@911c4e3; the commit is a bit broken due to rebasing, but after a tiny bit of fixing I could observe the same behaviour).

Release?

When should we think about another crates.io release?
