axiom's Issues

Implement sending to remote actors.

Currently, sending to remote actors is not enabled. Once the system can connect to a cluster, this ability should be added. The ActorSender enum should be enhanced so that a message can be serialized, forwarded to the recipient's node, decoded on the other side, and sent through the Local sender where the actor lives.
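
A rough sketch of that direction follows; the Remote variant, its field names, and the channel types are hypothetical and only illustrate the serialize, forward, decode, re-send flow described above.

// Hypothetical sketch; field names and types are illustrative only.
pub enum ActorSender {
    /// The actor lives in this process; deliver the message to its channel directly.
    Local(SeccSender<Arc<Message>>),
    /// The actor lives on another node; serialize the message here, forward it
    /// over the connection, decode it on the remote side, and re-send it
    /// through that node's Local sender.
    Remote {
        /// UUID of the remote actor system that owns the actor.
        system_uuid: Uuid,
        /// Handle to the networking thread that owns the connection.
        connection: std::sync::mpsc::Sender<Vec<u8>>,
    },
}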

Create a macro that improves downcast ergonomics.

Currently the test cases contain a usage of the dispatch/downcast function that is very manual and has poor ergonomics.

        fn handle(&mut self, aid: Arc<ActorId>, msg: &Arc<Message>) -> Status {
            dispatch(self, aid.clone(), msg.clone(), &StructActor::handle_op)
                .or_else(|| dispatch(self, aid.clone(), msg.clone(), &StructActor::handle_i32))
                .or_else(|| {
                    dispatch(
                        self,
                        aid.clone(),
                        msg.clone(),
                        move |state: &mut StructActor, aid: Arc<ActorId>, msg: &u8| -> Status {
                            assert_eq!(3, aid.received());
                            assert_eq!(7 as u8, *msg);
                            state.count += *msg as usize;
                            assert_eq!(29 as usize, state.count);
                            Status::Processed
                        },
                    )
                })
                .unwrap()
        }

This is unfortunately necessary due to Rust's mechanics around Any, but it could be improved with a macro that takes the state, the aid, the msg, and a list of handler functions (which could be closures) and then generates the code above.
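
A minimal sketch of one possible macro (the name dispatch_any! is hypothetical); it simply expands to the same or_else chain shown above, assuming the existing dispatch helper.

// Hypothetical macro; expands to the same `or_else` chain as the hand-written
// code above, panicking via `unwrap` if no handler matched the message type.
macro_rules! dispatch_any {
    ($state:expr, $aid:expr, $msg:expr, $first:expr $(, $rest:expr)* $(,)?) => {
        dispatch($state, $aid.clone(), $msg.clone(), $first)
            $( .or_else(|| dispatch($state, $aid.clone(), $msg.clone(), $rest)) )*
            .unwrap()
    };
}

// Usage would mirror the handler list in the example above:
// dispatch_any!(self, aid, msg, &StructActor::handle_op, &StructActor::handle_i32)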

Change ActorId send ergonomics.

With the changes from the serialization branch it should now be possible to implement

aid.send(Message::new(x));

given that aid is an ActorId.
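
A purely illustrative sketch of the method this would require; the field name sender is a placeholder for whatever the serialization branch actually exposes on the ActorId.

// Purely illustrative; `sender` is a placeholder field name.
impl ActorId {
    pub fn send(&self, message: Message) {
        // Enqueue on the sender already stored in (or reachable from) the aid.
        self.sender.send(message);
    }
}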

Put in a means to allow an actor to monitor another actor.

A monitor is a special actor that receives messages from the system and allows one actor to know about the life status of another actor. One possible implementation is to expand SystemMsg to include the message ActorStopped(Arc<ActorId>) so the receiver knows which actor was shut down. When an actor is monitoring another actor, the system will track all monitoring actor ids on the monitored actor and then send the message to all of those actors. Of course, if the system is hard-killed, one cannot be sure the monitor message will be received. In later implementations across a network this should take advantage of location transparency.
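
A small sketch of the proposed SystemMsg addition and of the per-actor bookkeeping it implies; both are illustrative only and elide the existing variants.

use std::sync::Arc;

// Illustrative only; existing SystemMsg variants are elided.
pub enum SystemMsg {
    // ... existing variants ...
    /// Sent to every monitoring actor when the named actor stops.
    ActorStopped(Arc<ActorId>),
}

// Hypothetical per-actor bookkeeping: the aids to notify when this actor stops.
struct Monitors {
    watchers: Vec<Arc<ActorId>>,
}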

Investigate possible race condition when scheduling actor in ActorId::send.

Currently the code checks whether receivable == 1 to determine if it needs to schedule the actor. If the actor already has messages then the dispatcher thread will have put it in the channel. I am wondering whether try_send will race the dispatcher threads so that an actor with a message does not get scheduled. If it is in the channel twice that is fine, though not the most performant, but that is preferable to not being in the channel when it has messages. The bad part is that if it gets out of sync it won't get back in sync, because the actor will have more than one receivable message and won't be in the channel. Perhaps an AtomicBool should be used instead to track whether the actor is scheduled.
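
A minimal sketch of the AtomicBool idea: only the thread that flips the flag from false to true enqueues the actor, which avoids depending on an exact receivable count. This is illustrative, not the actual scheduler code.

use std::sync::atomic::{AtomicBool, Ordering};

// Illustrative only: per-actor flag tracking whether the actor is already in
// the work channel.
struct ScheduleFlag {
    scheduled: AtomicBool,
}

impl ScheduleFlag {
    /// Returns true only for the caller that flips the flag from false to
    /// true; that caller is responsible for enqueuing the actor.
    fn try_claim(&self) -> bool {
        self.scheduled
            .compare_exchange(false, true, Ordering::AcqRel, Ordering::Acquire)
            .is_ok()
    }

    /// Called by the dispatcher once the actor's pending work is drained,
    /// allowing the next sender to reschedule it.
    fn release(&self) {
        self.scheduled.store(false, Ordering::Release);
    }
}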

Debug occasional philosophers.rs lockups.

The philosophers.rs example occasionally locks up, but it was merged because there were so many pertinent changes in the core code. This issue continues the work on that example.

Rename Statuses.

To make Status more consistent, the variants should have a consistent cognitive theme:

  • Processed should be renamed Consume.
  • Skipped should be renamed Skip.

Note that it might entail renaming Status as well because it's really a verb; maybe Action.
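
Sketched out, the renamed enum would look roughly like this; only the two variants mentioned above are shown.

// Illustrative rename only; any other existing variants would keep their
// current semantics under new, verb-style names.
pub enum Status {
    /// Formerly `Processed`: the message was handled and can be removed.
    Consume,
    /// Formerly `Skipped`: skip this message, leaving it in the channel.
    Skip,
}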

Optimize dispatcher handling for actors that get lots of small messages.

Rather than doing only one message at a time, it would be more efficient if a dispatcher thread would perform work up to a certain configurable time limit. The default might be 1 millisecond. If a message takes less than that, the dispatcher thread should handle the next message for the actor, if any, and the next, and so on until it reaches the timeout.

While implementing this issue, the developer will need to add a configurable time_slice to set the time in nanoseconds and a configurable time_slice_max which will serve as an upper bound for the time a single message may take to execute; beyond this level the system should log a warning.
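
A rough sketch of how a dispatcher thread might drain messages up to the time slice; the settings struct and the process_one callback are illustrative, with only the names time_slice and time_slice_max taken from the paragraph above.

use std::time::{Duration, Instant};

// Hypothetical settings; where they live in the configuration is up to the
// separate config issue.
struct DispatcherConfig {
    time_slice: Duration,     // e.g. Duration::from_millis(1)
    time_slice_max: Duration, // beyond this a warning should be logged
}

// Illustrative work loop: `process_one` handles a single pending message and
// returns false when the actor's channel is empty.
fn work_actor(cfg: &DispatcherConfig, mut process_one: impl FnMut() -> bool) {
    let slice_start = Instant::now();
    while slice_start.elapsed() < cfg.time_slice {
        let msg_start = Instant::now();
        if !process_one() {
            break;
        }
        if msg_start.elapsed() > cfg.time_slice_max {
            eprintln!("warning: a message exceeded time_slice_max");
        }
    }
}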

Add capabilities for actors to stop gracefully and brutally.

Currently actors go on forever until the system shuts down, and that is obviously not optimal. The following capabilities need to be integrated (a rough sketch follows the list):

  • A messaged stop with SystemMsg::Stop that an actor can process in order to shut down gracefully.
  • Returning a status of Stop from handling a message, indicating that the system should stop the actor.
  • Calling ActorSystem::stop() to force the behavior above immediately.
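
A rough illustration of how the first two items could fit together in an actor's handler; SystemMsg::Stop and Status::Stop are the proposed additions, MyActor is a placeholder state type, and content_as is the accessor mentioned in a later issue.

use std::sync::Arc;

// Illustrative only; `SystemMsg::Stop` and `Status::Stop` are the proposed
// additions and `MyActor` is a placeholder actor state type.
fn handle(_state: &mut MyActor, _aid: Arc<ActorId>, msg: &Arc<Message>) -> Status {
    if let Some(sys) = msg.content_as::<SystemMsg>() {
        if let SystemMsg::Stop = *sys {
            // Release resources here, then tell the system to stop this actor.
            return Status::Stop;
        }
    }
    Status::Processed
}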

Semantics and limits of TypeId

From #53:

TypeId should not be depended on across binaries compiled on different machines; that is not recommended. When running an Axiom cluster (after I get that finished) the recommendation will be to deploy the same compiled binary on all nodes.

I want to throw down some thoughts I have about TypeId and see what you think of them, so that I can get them straight in my own head, and hopefully someone will find this useful someday down the line.

First, I'm coming from ROS, which is designed to allow multiple programs to operate together, even written in different languages, and basically implements a message-based RPC system complete with message definition files similar to gRPC or Cap'n Proto. I sure as heck don't want to build something as complicated as that, but there are certain advantages that come out of it that I do kinda want:

  • You can have external programs that record/replay/produce messages for simulation and testing
  • You can start up a new node that connects to a running system and acts as a debugger to inspect what's going on in some detail

Your recommendation for using the same binary for every node in a cluster makes sense for something scaling horizontally like a web service, but is less convenient for something with lots of asymmetric parts like a robot system. If rebuilding your debugger program requires rebuilding and re-deploying your whole system, and that system is fundamentally stateful, that gets annoying and slow.

So I guess my question is, what can we reliably do with TypeId, and what are the exact constraints? How far can we rely on TypeId::of::<u32>() == whatever being accurate? Obviously if everything is built into one statically-linked binary, all TypeId's will line up with each other. And the TypeId docs say "...it is worth noting that the hashes and ordering will vary between Rust releases. Beware of relying on them inside of your code!", so that's the other extreme. But "same compiler" is pretty easy to guarantee, and the docs don't say a whole lot beyond that. So, will TypeId comparisons be valid if:

  • ...we build multiple binaries as part of the same crate?
  • ...we build multiple dylib's as part of the same crate?
  • ...we build multiple totally independent binaries using the same types (say, a common crate full of type definitions)?
  • ...we build multiple totally independent binaries using the same types, but for different targets? x86_64 vs aarch64, for example.

Investigate possibility of removing inner Arc in MessageContent.

This is an issue for efficiency and simplification. Currently the MessageContent type looks like the following:

pub enum MessageContent {
    /// The message is a local message.
    Local(Arc<dyn ActorMessage + 'static>),
    /// The message is from remote and has the given hash of a [`std::any::TypeId`] and the
    /// serialized content.
    Remote(Vec<u8>),
}

Note that the local message holds its content inside an inner Arc. It would be much nicer to get rid of the inner Arc, if possible, because that would reduce complexity and indirection.
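
Purely illustrative of one direction to investigate; whether the surrounding Message type can absorb the reference counting is exactly the open question.

// Purely illustrative: if the outer Message is already shared (e.g. passed
// around as Arc<Message>), the local variant could own the trait object
// directly behind a single Box instead of a second Arc.
pub enum MessageContent {
    /// The message is a local message.
    Local(Box<dyn ActorMessage + 'static>),
    /// The message is from remote and holds the serialized content.
    Remote(Vec<u8>),
}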

Refactor Actor to live in local sender.

Currently the Actor has to be looked up in a hash table when it is scheduled, which is inefficient and unnecessary. ActorSender::Local should be refactored to store the actor and avoid the lookup.

Implement tracking and warning for actor message processing.

Actors should track how long their messages take to process and how much time they spend in the channel, and use that to warn the user when they are sending messages that take too long to process. The threshold for the warning should be added to the configuration object for the actor system.

Impl Ord for ActorId?

I'm making a pubsub-ish thing and it would be nice if I could store ActorId's in a deterministic order. PR will come if you want it.
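
A minimal sketch of what such an impl could look like, assuming the ordering is derived from the UUID mentioned in the next issue; the uuid() accessor name is hypothetical.

use std::cmp::Ordering;

// Hypothetical: order ActorIds by their underlying UUID, giving a stable,
// deterministic total order. `uuid()` is a placeholder accessor name.
impl Ord for ActorId {
    fn cmp(&self, other: &Self) -> Ordering {
        self.uuid().cmp(&other.uuid())
    }
}

impl PartialOrd for ActorId {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}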

Implement a means to look up an Actor by its UUID.

The actor system should maintain a map of UUIDs to ActorIds where the key is the id field inside the ActorId. This will enable a user to look up an actor by its ID even when the actor is remote, because UUID v4 values have an incredibly small chance of colliding.
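
A rough sketch of the bookkeeping this would add, assuming the uuid crate's Uuid type and a lock-guarded map inside the actor system; names are illustrative.

use std::collections::HashMap;
use std::sync::{Arc, RwLock};
use uuid::Uuid;

// Illustrative registry keyed by the `id` field of each ActorId.
struct AidRegistry {
    by_uuid: RwLock<HashMap<Uuid, Arc<ActorId>>>,
}

impl AidRegistry {
    /// Look an actor up by its UUID, whether it is local or remote.
    fn find_aid_by_uuid(&self, uuid: &Uuid) -> Option<Arc<ActorId>> {
        self.by_uuid.read().unwrap().get(uuid).cloned()
    }
}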

Add graceful shutdown for threads handling remotes.

Currently the threads are simply terminated. It would be better if they shut down gracefully and informed the remotes that they have shut down. This could be wrapped up in a protocol for system-to-system communication.

Create configuration struct for the actor system.

Rather than have a mechanism by which the actor system reads config from a file, I would like to create a struct with the configuration options and allow the user to instantiate this struct however they want when passing it to the actor system. There should also be a set of defaults so that if the user passes no config structure, or only a partially filled structure, the system will configure itself with defaults.

The preferred way of implementation would be to use the builder pattern as in:

let config = ActorSystem::config().poll_ms(20);
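
A minimal sketch of what such a builder could look like; only poll_ms and the ActorSystem::config() entry point come from the example above, everything else (field names, defaults) is illustrative.

// Illustrative builder; only `poll_ms` comes from the example above.
#[derive(Clone, Debug)]
pub struct ActorSystemConfig {
    pub poll_ms: u64,
    // ... other options with sensible defaults ...
}

impl Default for ActorSystemConfig {
    fn default() -> Self {
        ActorSystemConfig { poll_ms: 10 } // hypothetical default
    }
}

impl ActorSystemConfig {
    /// Builder-style setter; unspecified options keep their defaults.
    pub fn poll_ms(mut self, value: u64) -> Self {
        self.poll_ms = value;
        self
    }
}

impl ActorSystem {
    /// Entry point used in the example above: a builder preloaded with defaults.
    pub fn config() -> ActorSystemConfig {
        ActorSystemConfig::default()
    }
}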

Design question: pattern match on message type?

I'm used to Erlang's pattern of feeding an actor message into a pattern match, and the chains of if let Some(x) = msg.content_as::<Foo>() feel clunky in comparison. I know there's only so much one can do with TypeId, but I was wondering if you had any thoughts on whether it would be possible to have a nicer pattern?

Another concern I have that I don't know the answer to is whether TypeId is stable between compiler versions, or even between multiple builds of the same program. If not, having multiple separate programs in the same cluster could lead to their TypeIds being incompatible...

Add ability for SECC to track how long the message has been in the channel and other timing stats.

It would be potentially useful to know how long a message has been in the channel. SECC should implement this by tracking the difference in microseconds between enqueue and dequeue time in the SeccNode and then rolling those numbers up into an average when a message is received, in order to be able to report timing metrics. At the same time, any other timing metrics should be explored and implemented, such as "time waiting for capacity" and "time waiting for messages".

Update DevOps structure

  • Allow travis-ci to deploy on tag.
  • Block users from committing directly to master or pushing to master.

Split SECC off into its own crate.

Axiom uses SECC but not the other way around, so it should be in its own crate. Once SECC is super stable after being used by Axiom, it should be moved into its own repository.
