emerick42 / kairoi
Kairoi is a Dynamic, Accurate and Scalable Time-based Job Scheduler.
License: MIT License
The idea is to have an HTTP processor that processes jobs by POSTing HTTP requests to configured URLs.
The rule can be configured using the http runner, with a single configuration parameter: the url field, containing the target URL for the POST request (for example, http://localhost/route).
The HTTP request has the job identifier as its body, in plain text:
POST /route HTTP/1.1
Host: localhost
Content-Type: text/plain
Content-Length: 16
app.domain.job.1
The processor marks the job as executed in case of a successful response (a 2xx status code), and as failed in all other cases.
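As an illustration, a client application could expose such an endpoint with any HTTP stack. The following is a minimal sketch in Python (the /route path, port, and handler names are assumptions for the example, not part of Kairoi): it records the job identifier from the request body and replies with a 2xx status, which the processor would treat as a successful execution.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Job identifiers received from the scheduler, in arrival order.
received_jobs = []

class JobHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # The request body is the job identifier, in plain text.
        length = int(self.headers.get("Content-Length", 0))
        received_jobs.append(self.rfile.read(length).decode("utf-8"))
        # Any 2xx response lets the processor mark the job as executed;
        # any other status code marks it as failed.
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the example quiet

def serve(port):
    """Listen for job notifications, e.g. serve(80) for http://localhost/route."""
    HTTPServer(("localhost", port), JobHandler).serve_forever()
```

Note that this sketch accepts any request path; restricting it to the configured route is left out for brevity.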
When Kairoi's clients are on different networks, it is safer to encrypt communications to ensure messages cannot be intercepted. Currently, the main way to encrypt data between the server and clients is to set up a custom SSH tunnel. Kairoi should provide a native method for securing communications with clients.
We should add TLS support for communications with clients. This option should be configurable through the configuration.toml file: it should allow activation and deactivation, and also let the user provide the paths of the certificates to use.
In a first version, there is no particular need to verify client certificates, but this can be added if the cost is low.
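As a sketch, the option could look like this in configuration.toml (the section and key names here are hypothetical, not existing Kairoi options):

```toml
# Hypothetical TLS section -- names are illustrative only.
[controller.tls]
enable = true
certificate = "/etc/kairoi/tls/server.crt"
private_key = "/etc/kairoi/tls/server.key"
```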
There are more and more cases (even when developing) where users could benefit from configuring Kairoi at runtime instead of compile time. The configuration of the address and port is the most basic example. Of course, this feature will be required before the first official beta release.
We should implement the loading of a simple configuration file with a few options, along with the documentation skeleton for it.
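Taking the address and port as the first options, such a file could minimally look like this (the section and key names are hypothetical, for illustration):

```toml
# Hypothetical minimal configuration.toml -- names are illustrative only.
[controller]
address = "127.0.0.1"
port = 5678
```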
In order to make Kairoi ready for production, security options need to be added. This is a proposal to allow Kairoi servers to prevent unexpected clients from accessing and modifying data.
A basic solution would be to require authentication for all clients. A layer could be added in the Controller, verifying the credentials provided by a client at the beginning of a connection. The client would then be in the "authenticated" state for the rest of the TCP connection. The server would need to provide a new "authenticate" instruction, whose argument would be compared to a value configured through the configuration.toml file. Since this Controller layer sits above the existing instruction layer, every instruction could generate an authentication error.
The type of access control proposed here is global, without the concept of user. Therefore, the value used for authentication can be a simple token. If the concept of Kairoi users happens to make sense in the future, this authentication should be improved at the same time.
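As a sketch, the token could live in configuration.toml like this (the section and key names are hypothetical), with clients sending the same value through the new "authenticate" instruction before issuing any other instruction:

```toml
# Hypothetical authentication section -- names are illustrative only.
[controller.authentication]
token = "some-long-random-token"
```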
Currently, the configuration file is loaded from the directory where Kairoi is started. While this can be convenient for testing purposes, it is non-standard (making it harder for users to find the configuration source), and it can't be customized. In the prebuilt Docker image, for example, it prevents the container from providing a custom configuration while still allowing data to be mounted as a volume.
We should allow this path to be customized.
An idea would be to use a standard Linux configuration path as the default (/etc/kairoi, for example). In addition, an environment variable or a command-line argument could be added to change this directory, since it can't be changed from the configuration file itself.
We could also use a hierarchy of configuration files, as with XDG_DATA_DIRS, but I'm not sure this would help much, especially considering the added complexity and the fact that complete customization would still need to be implemented.
In the documentation index, in the Usage section, the link to the "Kairoi Server Configuration Reference" is invalid.
It links to a configuration.toml file in the documentation. It must instead link to the configuration.md file in the same directory.
Currently, a job can get locked in the triggered state in two cases: when the write marking the job as executed or failed fails (typically due to a temporary write error on the filesystem), and when the server stops while the job is still in the triggered state. Even users are not able to recreate the job to set it back to planned (this is a good thing, as it prevents modifying a job while it is being processed).
Ideally, a job shouldn't be able to stay triggered for more than a certain amount of time (long enough to let the processor do its work, but as short as possible). However, it may not be possible to solve the problem properly this way. Another solution could be to retry the write later in the first case, and to act specifically on triggered jobs at initialization in the second case.
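The retry idea for the first case could be sketched as follows (generic Python rather than Kairoi's actual code; the function and its parameters are illustrative):

```python
import time

def write_with_retry(write, attempts=5, initial_delay=0.1):
    """Try a write, retrying with exponential backoff on temporary failures.

    `write` is any callable performing the filesystem write; a temporary
    error is expected to surface as an OSError.
    """
    delay = initial_delay
    for attempt in range(attempts):
        try:
            return write()
        except OSError:
            if attempt == attempts - 1:
                # Give up: the job stays triggered, to be handled at startup.
                raise
            time.sleep(delay)
            delay *= 2  # back off before the next attempt
```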
Currently, the logfile generated by the database to store all information on disk grows without bound as jobs and rules are created or updated (see #2 for details).
We have to add a mechanism to compress the content of the logfile and be able to regularly flush it. The compressed logfile should only contain the most recent log entry for a given entity (as opposed to the live logfile, which keeps all entries for all entities). The system must be designed to prevent any data loss when flushing the logfile, and it should have the lowest possible performance impact on creating and updating entities.
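The core of the compression step, keeping only the most recent entry per entity while preserving order, could be sketched like this (illustrative Python, not Kairoi's actual implementation):

```python
def compact(log_entries):
    """Keep only the most recent log entry for each entity.

    `log_entries` is an ordered iterable of (entity_id, entry) pairs,
    oldest first, as they would appear in the live logfile.
    """
    latest = {}
    for entity_id, entry in log_entries:
        # A newer entry supersedes the previous one for the same entity,
        # and moves to the position of the latest write.
        latest.pop(entity_id, None)
        latest[entity_id] = entry
    return list(latest.items())
```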