elfo-rs / elfo
Your next actor system
What about introducing levels (`Verbose` vs `Normal`) in dumping to have a more flexible way to filter messages out?
Actors can use `tokio::select`, but then the `elfo_message_handling_time_seconds` metric isn't calculated correctly, which breaks the calculation of utilization. Maybe we need another way to calculate utilization?
Works fine:

```toml
[common]
system.logging.targets.hyper.max_level = "Info"

[a]
system.logging.targets.reqwest.max_level = "Info"
```

Broken:

```toml
[common.system.logging.targets]
hyper.max_level = "Info"

[a]
system.logging.targets.reqwest.max_level = "Info"
```
crossbeam's `CachePadded` contains more variants than the one in `elfo-utils`. Need to update the code or just use `crossbeam-utils` directly.
After rewriting the storage, most tests are gone; they need to be rewritten.
TODO: describe the problem.
An actor containing an infinite loop cannot be detected by `elfo_busy_time_seconds` for now, because the metric is updated only after polling the actor's future. Thus, we need a way to calculate the metric outside of polling as well.

One possible implementation is to have some "global" `ThreadLocal<(Addr, Timestamp)>` (thread-local). Before polling the future, we store the current actor's address and timestamp there. Then, a special actor iterates over it periodically and updates the metric; see the sketch below.

However, it requires a way to emit metrics with a custom `ActorMeta`. Also requires #8.
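A minimal sketch of the idea, with hypothetical names (`Addr` is a stand-in for elfo's address type, and a mutex-guarded registry replaces the per-thread `ThreadLocal` to keep the sketch simple):

```rust
use std::{
    sync::Mutex,
    time::{Duration, Instant},
};

// Hypothetical stand-in for elfo's actor address type.
type Addr = u64;

// Registry of "currently polled" actors. The real implementation would use
// a per-thread `ThreadLocal<(Addr, Timestamp)>`; a Mutex keeps the sketch simple.
static ACTIVE: Mutex<Vec<(Addr, Instant)>> = Mutex::new(Vec::new());

// Called right before polling an actor's future.
fn before_poll(addr: Addr) {
    ACTIVE.lock().unwrap().push((addr, Instant::now()));
}

// Called right after the poll returns.
fn after_poll(addr: Addr) {
    ACTIVE.lock().unwrap().retain(|(a, _)| *a != addr);
}

// The special actor runs this periodically and can attribute the elapsed
// busy time even to actors that never return from `poll`.
fn update_busy_time(report: impl Fn(Addr, Duration)) {
    for (addr, since) in ACTIVE.lock().unwrap().iter() {
        report(*addr, since.elapsed());
    }
}

fn main() {
    before_poll(42);
    std::thread::sleep(Duration::from_millis(5));
    update_busy_time(|addr, busy| println!("actor {addr}: busy for {busy:?}"));
    after_poll(42);
}
```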
Sometimes it's useful to produce ticks within some interval around the configured period. For instance, when a lot of actors inside a group share the same period for doing something heavy (e.g. flushing to a DB), jitter helps to avoid spikes.
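A minimal sketch of such jitter, assuming a uniform ±10% spread around the configured period (`jittered` is a hypothetical helper, not elfo's API):

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

// Pick the next tick period uniformly from [0.9 * period, 1.1 * period),
// so a group of actors with the same period doesn't fire all at once.
fn jittered(period: Duration) -> Duration {
    // Cheap pseudo-randomness from the clock; a real implementation
    // would use a proper RNG such as the `rand` crate.
    let nanos = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap()
        .subsec_nanos();
    let ratio = 0.9 + 0.2 * (nanos as f64 / 1e9); // in [0.9, 1.1)
    period.mul_f64(ratio)
}

fn main() {
    let period = Duration::from_secs(30);
    println!("next tick in {:?}", jittered(period));
}
```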
We can configure dumping only for all classes at once. Add a way to override settings for specific classes.
- `elfo_emitted_dumps_total`, `elfo_limited_dumps_total`, `elfo_lost_dumps_total`, `elfo_written_dumps_total`
- `elfo_dumps_usage_bytes`, `elfo_dumps_usage_items`
- `elfo_dumps_capacity` (in elements? in parts?)

`exec` function to provide short traces.

`error.stack = [...]` in the logger.

The logger is based on `tokio::select` for now. The related issue: #25. Need to use `Stream` to avoid using `tokio::select`; a sketch follows below.
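A minimal sketch of the `Stream`-based approach (names are illustrative, not the logger's actual internals): merge the inputs into one stream and drive a single loop.

```rust
use futures::stream::{self, StreamExt};

// Events the logger loop reacts to.
enum Event {
    Record(String),
    ReopenFile,
}

#[tokio::main]
async fn main() {
    // In the real logger these would be channel-backed streams.
    let records = stream::iter([Event::Record("hello".into())]);
    let control = stream::iter([Event::ReopenFile]);

    // One merged stream replaces a `tokio::select!` over two channels.
    let mut merged = stream::select(records, control);
    while let Some(event) = merged.next().await {
        match event {
            Event::Record(line) => println!("write: {line}"),
            Event::ReopenFile => println!("reopen log file"),
        }
    }
}
```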
It can be useful to emphasize (in bold) the parts of timestamps that changed between adjacent log lines.
Sources are polled only in the `ctx.recv().await` call for now.
`ctx.request(..).await` doesn't produce any metric for responses. Actually, we produce no special counters for any incoming messages, but that's ok for messages incoming via the mailbox because there is `elfo_handling_time_seconds_count`.
Status changes should be dumped.
- `ValidateConfig` should be discarded by default.
- `UpdateConfig` should bypass the mailbox.
- `UpdateConfig` shouldn't be sent as a request.

It should significantly decrease the time of config updating in the case of many actors.
Now they're called in the sender's scope. This can degrade performance; we need to run benchmarks.
The metric is expensive for two reasons:

- it's updated very often, e.g. on every `tokio::task::yield_now()` or for a lot of quickly handled messages;
- it's a `histogram` metric, the most expensive of all types.

Instead, we can calculate `elfo_busy_time_seconds_max`, `elfo_busy_time_seconds_sum`, and `elfo_busy_time_seconds_count` separately (see the sketch below). It's enough for most use cases.
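A minimal sketch of that replacement using plain atomics (how the three values feed into the actual metrics pipeline is left out; a real implementation would also reset `max` on each render):

```rust
use std::sync::atomic::{AtomicU64, Ordering::Relaxed};

// Three cheap counters replace the histogram; they can be rendered as
// elfo_busy_time_seconds_{max,sum,count}.
#[derive(Default)]
struct BusyTime {
    sum_ns: AtomicU64,
    count: AtomicU64,
    max_ns: AtomicU64,
}

impl BusyTime {
    fn record(&self, busy_ns: u64) {
        self.sum_ns.fetch_add(busy_ns, Relaxed);
        self.count.fetch_add(1, Relaxed);
        self.max_ns.fetch_max(busy_ns, Relaxed);
    }
}

fn main() {
    let m = BusyTime::default();
    for busy_ns in [120_000, 450_000, 80_000] {
        m.record(busy_ns);
    }
    println!(
        "sum={}ns count={} max={}ns",
        m.sum_ns.load(Relaxed),
        m.count.load(Relaxed),
        m.max_ns.load(Relaxed),
    );
}
```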
Now we're limiting the number of `recv()` calls to avoid starving other actors on the same scheduler thread. However, there are several ways to improve it:

- `try_recv()` + separate `recv().await` calls (see the sketch below).
```rust
register_gauge!("test", "a" => "a", "b" => "b");
register_gauge!("test", "b" => "b", "a" => "a");
```

rendered:

```text
test{actor_group="reporters",a="a",b="b"} 0
test{actor_group="reporters",b="b",a="a"} 0
```

The same metric registered with labels in a different order is rendered as two distinct series; labels should be normalized (e.g. sorted) before rendering.
TODO: describe the problem.
We can use the `dynstack` crate for storing dumps instead of `SmallBox<...>` to avoid extra allocations.
```rust
#[message]
enum Impossible {}
```
We don't need to print panics inside actors, because they are handled by the supervisor. Alternatively, we should print them using the logger.
TODO: describe later.
Remove metrics of finished actors if they have been rendered at least once.
There is no protection against hanging tasks (e.g. due to an infinite loop) in tokio.
One possible way is to have one dedicated CPU for threads with hanging actors. Once a hanging actor is detected, all other tasks should be moved to other threads. Then, we can set the affinity of the thread to this "sump" CPU. All hanging actors will share the same CPU and rely on the OS scheduler.

It's unlikely that this is possible with tokio's scheduler because of the lack of an appropriate API.
It should be possible to call `ctx.send::<SomeRequest>()` to support the "fire and forget" case and to allow better evolution in distributed systems.
It would be nice to have the ability to pass functions as arguments to the `assert_msg!` macro. Right now it supports only structures.
The simplest example is the info metric with the version of the service.
For now, it returns only responses.
OpenMetrics defines exemplars, and Prometheus already supports them in experimental mode.