
pico-engine


An implementation of the pico-engine hosted on Node.js

Getting Started / Installing / Configuration

See packages/pico-engine for detailed step-by-step instructions to get started.

Contributing

This section is for those who want to contribute to the pico-engine source code. KRL programmers would be better off following the link in the previous section.

The pico-engine is made up of several smaller modules, each with its own documentation and test suite. They all live in this repository in the packages/ directory (mono-repo style, using lerna):

  • pico-engine - this is the npm package people install and use
  • pico-engine-core - executes compiled KRL and manages event life-cycle
  • pico-engine-ui - the default UI of pico-engine
  • krl-stdlib - standard library for KRL
  • krl-compiler - compiles AST into a JavaScript module
  • krl-parser - parses KRL to produce an abstract syntax tree (String -> AST)
  • krl-generator - generates KRL from an AST (AST -> String)
  • krl-editor - in-browser editor for KRL

To run the pico-engine in development mode do the following:

$ git clone https://github.com/Picolab/pico-engine.git
$ cd pico-engine
$ npm run setup
$ npm start

That will start the server and run the tests. npm start is simply an alias for cd packages/pico-engine && npm start.

NOTE about dependencies: generally, don't use npm i; use npm run setup from the root instead. lerna will link up the packages so that when you make changes in one package, they are picked up by the others.

Working in a sub-package

Each sub-package has its own tests, and its npm start command is wired to watch for file changes and re-run the tests. For example, to work on the parser:

$ cd packages/krl-parser/
$ npm start

NOTE: When running via npm start, PICO_ENGINE_HOME will default to your current directory, i.e. your clone of this repository.

Making changes

Use a branch (or fork) to do your work. When you are ready, create a pull request. That way we can review it before merging it into master.

The Pico Labs documentation has a page inviting contributions and giving a step-by-step example, at Pico Engine welcoming your contributions.

Changelog

To view details about versions: CHANGELOG.md

License

MIT

Contributors

alexkolson, b1conrad, burdettadam, cambodiancoder, cgrimm013, farskipper, iso2013, jhuch-cs, joshmann35, jsbach-byu, kandarej, kingkoa, lewisthomas, michaeledwardblack, nmartinezbyu, seanag0234, thebatman7, windley


pico-engine's Issues

OAuth

each pico engine to be multi-tenanted

parser - add a tokenizer step

Currently the parser parses a single character at a time. Tokenizing will speed it up greatly. To start, at least, we need to tokenize whitespace and strings.
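A minimal sketch of such a tokenizer step in JavaScript, assuming we only need whitespace and string tokens to start (the token names, shapes, and `tokenize` API here are illustrative, not the actual krl-parser interface):

```javascript
// Hypothetical tokenizer pass that could run before parsing. Everything
// that isn't whitespace or a string literal falls through as a RAW token.
function tokenize(src) {
  const tokens = [];
  let i = 0;
  while (i < src.length) {
    const c = src[i];
    if (/\s/.test(c)) {
      // collapse a run of whitespace into a single token
      let j = i;
      while (j < src.length && /\s/.test(src[j])) j++;
      tokens.push({ type: "WHITESPACE", src: src.slice(i, j), loc: i });
      i = j;
    } else if (c === '"') {
      // consume an entire string literal at once, honoring escapes
      let j = i + 1;
      while (j < src.length && src[j] !== '"') {
        if (src[j] === "\\") j++; // skip the escaped character
        j++;
      }
      tokens.push({ type: "STRING", src: src.slice(i, j + 1), loc: i });
      i = j + 1;
    } else {
      tokens.push({ type: "RAW", src: c, loc: i });
      i++;
    }
  }
  return tokens;
}
```

The parser would then match against whole tokens instead of single characters, which is where the speedup comes from.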

Allow POST data to be used for sky event and query

Merge the query-string and POST data parameters together, giving precedence to the POST data. For example, with /....?a=1&c=4 in the URL and a=3&b=2 in the body, the engine would see {a: 3, b: 2, c: 4}.

Supported encodings:
application/json
application/x-www-form-urlencoded
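Assuming the query string and body have already been parsed into plain objects (e.g. by the web framework), the proposed merge could be sketched as:

```javascript
// Illustrative sketch, not the engine's actual implementation.
// Body parameters win whenever both sides define the same key.
function mergeEventParams(query, body) {
  return Object.assign({}, query, body);
}

// The example from above: ?a=1&c=4 in the URL, a=3&b=2 in the body
mergeEventParams({ a: "1", c: "4" }, { a: "3", b: "2" });
// → { a: "3", c: "4", b: "2" }
```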

Specifications for persistent variables

Would it be useful to be able to create a specification for an entity variable?

The entity var spec would provide a way of declaring the structure of the entity variable.

Event Loop

So from what I remember of the original implementation, the event loop is very specifically built up/controlled/aware of salient rulesets/etc.

I always found this felt a bit.. strange/wrong as compared to a fully asynchronous message passing system, and maybe like it was done out of necessity (performance, etc). So I wanted to open up a discussion here.

In my implementation on the welcomer framework (on akka: fully async message passing, no 'controlled loop'), the only real hurdle I remember coming across was how to nicely 'streamline' things to get a response back for a webservice or similar when you can't necessarily know whether everything that might be interested is done with its processing or not. I can't remember if I ever implemented it, but I was thinking along the lines of salient rulesets sending back an "I'm done/not going to respond" type message to handle this case, with a reasonable timeout in case they don't even send that.

The order of events is handled pretty well at the actor level in akka, as each actor has its own mailbox, which keeps events FIFO (by default), with only one being handled at a time by the actor code itself.

From memory, the way I ended up implementing the main event routing was via a main Event Bus that each pico could subscribe to (with various filters on event domain/type/etc.) to create its salience. Given how performant message sending is within akka, any filtering of 'salience' beyond domain/type was handled at the pico level itself (dropping the message if the pico wasn't interested).

So, in summary: is a controlled event loop or a fully async message-passing system the better implementation choice going forward, and why?

User-defined modules

Ensure that they work and that a ruleset can use multiple instances of a user-defined module

Supported Event APIs

Drop everything in KRE except

  • /sky/cloud/
  • /sky/event/
  • /sky/flush/
  • /sky/schedule/
  • /oauth/* (maybe not all)

Things to drop

Drop the following:

  1. dispatch block
  2. web actions (could be implemented as library)
  3. javascript generation
  4. callbacks
  5. built-in XDI support (outdated)
  6. persistent iterate, mark, forget, counters
  7. special status for web events
  8. support for 'select using'
  9. authz in meta block
  10. use css and use javascript resources
  11. any library that could be implemented in KRL

Add multi-value declarations/naming

(a, b, c) = (1, 2, 3)

I found several places this would have made Fuse code more concise. Ed has lots of places in CloudOS where this would have made the code easier to read.
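For reference, JavaScript's destructuring assignment already has the semantics this proposal describes, which shows the intended behavior:

```javascript
// Bind several names from a tuple-like value in one statement.
const [a, b, c] = [1, 2, 3];
// a === 1, b === 2, c === 3

// Especially useful for functions that return several related values:
function minMax(xs) {
  return [Math.min(...xs), Math.max(...xs)];
}
const [lo, hi] = minMax([4, 7, 1]); // lo === 1, hi === 7
```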

Inter-pico Messaging

Currently inter-pico messaging is HTTP based.

Advantages:

  • Web tech is widespread
  • Firewalls open port 80 by default
  • other projects also support this (Web of Things)
  • Lots of other people working on making it fast, etc.
  • addressing is easy (URLs)

Disadvantages:

  • Have to build messaging on top of Web
  • Small-batch HTTP messages have high overhead

Event Store

I think it would be useful for every pico to automatically enable an event store where every event (with attributes) that the pico ever received is stored, in order.

Maybe there ought to be a special persistent var that they are in?

Choosing a Target

[Note: edit this list here and comment on various pros and cons below]

Choices:

  • JVM
  • V8 (or other JS VM)
  • Go

Persistent variable initialization

KRL 1.0 initializes persistent variables to 0 for historic reasons. They should be null if uninitialized so they are falsey. (Unless we have specifications like proposed in #8)

Ruleset names

Taking @farskipper's lead, I've been naming my KRL rulesets using the technique recommended by Java for package names (see Java spec 7.7). Much of the reasoning there applies to KRL rulesets.

Metrics for engine

  • cache hit and miss for RID
  • queue lengths
  • fast path/slow path measurement
  • event loop metrics like rules scheduled, rescheduled
  • rules fired/rules not fired

How should universal rids work?

[Note: edit list here and comment below]

There are several choices:

  • RID is a URL
  • Java-like domain-name-rooted hierarchy
  • Distributed RID ledger

Wrangler ruleset

The new engine needs a ruleset with complete wrangler functionality.

Package manager

A package manager that sets up a new pico project would be nice.

Command abstractions

KRL has no way to create abstractions of commands in the postlude.

A defcommand would parallel defaction and function. By abstracting commands we'd avoid writing rules and using chaining to do this.

Maybe we don't need this? Is rule abstraction sufficient?

make bootstrap page more robust

it shouldn't claim bootstrap is complete and display links to other pages until the bootstrap operations have successfully completed

Event Expressions

Change the parser so that:

  • domains are not optional
  • drop select using syntax

Repurposing the dispatch block

The dispatch block used to be used for determining which Web site to fire rules on back in the KBX days.

I propose repurposing it as a way for a ruleset to route queries. For example, in Fuse, it got confusing because you would get trips using the query

v1_fuse_trips/trips

and fuel usages with

v1_fuse_fuel/fillups

This isn't a great API experience because there are lots of endpoints to remember. Toward the end, I figured I could have solved this by saying that the vehicle ruleset (v1_fuse_vehicle) was the 'primary' ruleset for a vehicle pico and then putting functions in it like the following:

fillups = function(offset, limit) {
    v1_fuse_fuel:fillups(offset, limit)
}

The only place fillups() would be provided would be in v1_fuse_vehicle. The functions in the other rulesets would be private.

This works, but it's a lot of work, and the underlying system can't optimize it very well.

What if, instead, v1_fuse_vehicle had the following dispatch block:

dispatch {
    query fillups(offset, limit) is v1_fuse_fuel:fillups(offset, limit)
    query trips(offset, limit) is v1_fuse_trips:trips(offset, limit, true)
    ...
}

These functions would be automatically declared and provided. I think we'd require that any module referenced in the dispatch be used in the meta block, but we could discuss that.

Besides allowing optimization, this also puts all the query routing in one place. I used the keyword query so that we could add other things to the dispatch block later.

Note: there's no need to route events because the pico's internal event bus (the salience graph and scheduler) already routes all events to all rulesets that have an interest in the event.
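A hypothetical sketch, in JavaScript, of the wrapper functions the engine might generate from a dispatch block like the one above (the module objects here are stand-ins for the real Fuse rulesets):

```javascript
// Stand-ins for the v1_fuse_fuel and v1_fuse_trips ruleset modules.
const v1_fuse_fuel = {
  fillups: (offset, limit) => ({ kind: "fillups", offset, limit }),
};
const v1_fuse_trips = {
  trips: (offset, limit, all) => ({ kind: "trips", offset, limit, all }),
};

// What the engine could auto-declare and provide from the dispatch block:
// thin wrappers that delegate to the target modules, all in one place.
const queries = {
  fillups: (offset, limit) => v1_fuse_fuel.fillups(offset, limit),
  trips: (offset, limit) => v1_fuse_trips.trips(offset, limit, true),
};

queries.trips(0, 10);
// → { kind: "trips", offset: 0, limit: 10, all: true }
```

Because the routing is declarative, the engine could inline or otherwise optimize these delegations instead of executing hand-written wrapper functions.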

equality

My first attempt at using the union set operator (in the node KRL) was unsatisfactory because of the way that the node implementation evaluated it.

When ent:children was [{"x": 1}, {"x": 2}] and I evaluated ent:children.union([{"x": 1}]), I expected to get the original value of ent:children back, but instead got [{"x": 1}, {"x": 2}, {"x": 1}].

We are changing the implementation to use deep equality. But this raises the question of equality in general in the new KRL. Currently, == is implemented using JavaScript's ===.

The desired behavior for the union operator seems inconsistent with {"x": 1} == {"x": 1} evaluating to false.
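The behavior the issue calls for can be sketched in JavaScript; `deepEqual` and `union` here are illustrative stand-ins, not the engine's actual implementation:

```javascript
// Simplified deep equality via JSON serialization. Note this is key-order
// sensitive; a real implementation would use a structural deep-equal.
function deepEqual(a, b) {
  return JSON.stringify(a) === JSON.stringify(b);
}

// A union that treats structurally-equal maps as the same element,
// instead of comparing object references with ===.
function union(arr, other) {
  const out = arr.slice();
  for (const item of other) {
    if (!out.some((x) => deepEqual(x, item))) out.push(item);
  }
  return out;
}

union([{ x: 1 }, { x: 2 }], [{ x: 1 }]);
// → [{ x: 1 }, { x: 2 }]  (the original value, as the issue expects)
```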

manage rulesets

Ruleset source code versions should be managed using GitHub rather than by the engine. Furthermore, a ruleset should be installed in a pico by specifying the location of its source code as a URL.

Channel Policies, events with private key signatures and or encryptions

I would like the ability to guarantee that my endpoint device only has the ability to raise certain events to my pico-engine. My proposed solution is to incorporate public/private key operations. An endpoint such as my phone would "sign" events with its private key, and the pico-engine would "validate" the signature with my phone's public key. Only after the event's signature is validated would the pico-engine process the event. This would allow me to feel safe using events for unlocking doors. If a door required a passcode event attribute, the ability to encrypt the event with the pico-engine's public key would add protection for the passcode.
Every event has a fresh timestamp and event identifier; these attributes guarantee each signature is unique and would help prevent reverse-engineering private keys.

ECIs are considered secrets, which partially provides the security this would extend, but I argue that casual monitoring of networks could easily expose ECI secrets. With signatures, an attacker could reconstruct the complete communication semantics of a pico system, including ECIs, and still have no power to affect it without the pico devices' private keys.

What to do about JSONPath

JSONPath is a good idea, but maybe too arcane.

It has the syntax it has because the JSONPath idea already existed.

If we're going to reimplement it, maybe we ought to simplify it and combine it with the idea of hash paths. JSONPath came along before hash paths. Hash paths aren't as powerful as they could be, but they did away with the need for JSONPath in many cases.

One source of inspiration is the Specter library in Clojure. It uses navigation paths which feel like hash paths.
