The Unison language

Overview

Unison is a modern, statically-typed purely functional language with the ability to describe entire distributed systems using a single program. Here's an example of a distributed map-reduce implementation:

-- comments start with `--`
mapReduce loc fn ifEmpty reduce data = match split data with
  Empty          -> ifEmpty
  One a          -> fn a
  Two left right ->
    fl = forkAt loc '(mapReduce loc fn ifEmpty reduce !left)
    fr = forkAt loc '(mapReduce loc fn ifEmpty reduce !right)
    reduce (await fl) (await fr)

This function can be either simulated locally (possibly with faults injected for testing purposes), or run atop a distributed pool of compute. See this article for more in-depth coverage of how to build distributed computing libraries like this.

Building using Stack

If these instructions don't work for you or are incomplete, please file an issue.

The build uses Stack. If you don't already have it installed, follow the install instructions for your platform. (Hint: brew update && brew install stack)

If you have not set up the Haskell toolchain before and are trying to contribute to Unison on an M1 Mac, we have some tips specifically for you.

$ git clone https://github.com/unisonweb/unison.git
$ cd unison
$ stack --version # we'll want to know this version if you run into trouble
$ stack build --fast --test && stack exec unison

To run the Unison Local UI while building from source, you can use the dev-ui-install.sh script. It downloads the latest release of unison-local-ui and puts it in the location expected by the unison executable created by stack build. When you start unison, you'll see a URL where the Unison Local UI is running.

See development.markdown for a list of build commands you'll likely use during development.

Language Server Protocol (LSP)

View Language Server setup instructions here.

Codebase Server

When ucm starts, it launches a Codebase web server that is used by the Unison Local UI. It selects a random port and a unique token that must be used when starting the UI in order to connect to the server correctly.

The port, host and token can all be configured by providing environment variables when starting ucm: UCM_PORT, UCM_HOST, and UCM_TOKEN.
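For example, to pin all three when launching ucm (the values shown are placeholders):

$ UCM_PORT=5757 UCM_HOST=127.0.0.1 UCM_TOKEN=local-dev-token ucm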

Configuration

See the configuration documentation here.

Issues

rename

I was browsing the code and came across this line in ABT.rename:

Var v -> if v == old then var new else var old

Shouldn't the else case be var v?
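For reference, the suggested fix would make the fallback return the matched variable unchanged - shown here only to make the report concrete:

Var v -> if v == old then var new else var v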

Investigate/implement IPFS-based implementation of `BlockStore`

See #86 for some context.

IPFS is a big project, but the part we are most interested in is the distributed, content-addressed file system, which could work as a nice backend for BlockStore. It would have the advantage that IPFS-backed Unison nodes would all share a common Universe, which eliminates the hash-syncing step of the distributed evaluation protocol. It also has a solution to the hash distribution problem - common hashes are replicated based on demand across the IPFS network, which can minimize bottlenecks and bring the data closer to where it is needed.

Something I was rather concerned about was this issue, but it looks like the IPFS folks are on the case.

The goal is to investigate and possibly implement an IPFS-based BlockStore. At first glance, IPFS doesn't look like a natural fit for the Series concept that BlockStore has. Perhaps it could be implemented using IPNS, or perhaps we don't rely on IPFS for that aspect of the implementation.

/cc @jbenet - Juan, not sure if you get this notification, but hello! IPFS seems like a great project. We may be building atop it. Also, looking forward to meeting you at Full Stack Fest this year.

Screenshots for the README.md

Please provide some project screenshots, so somebody can see the effect without setting up everything.

A live demo would be fine too.

Implement an efficient process pool for sandboxing evaluation of foreign computations

This is a fun, important, fairly self-contained project that we aren't blocked on right now, and it requires minimal background. If you'd like to get involved in Unison development, read on and see if you'd like to take the lead on an implementation!

This project will be an important component of the distributed systems API. Reading or at least skimming that post is probably good background but isn't strictly necessary.

When a Unison node receives a computation to evaluate from another node (a "foreign computation"), currently we do so in the same process as the node itself. This is bad for a few reasons:

  • What if the foreign computation is an infinite loop? (Or a computation that provably terminates but which takes longer than the age of the universe to do so...)
  • What if the foreign computation deliberately leaks a huge amount of memory?
  • There's also the concern that the foreign computation will do something evil like delete random files on our filesystem. That level of sandboxing is handled as a separate layer. I'll touch on how this relates to this project a bit later.

Since we don't necessarily trust the foreign computation with the full set of CPU and memory resources available to a Unison node, we need to run foreign computations in some sort of sandbox. Here's the API (subject to tweaking, but this is probably a good start):

module Unison.Runtime.ProcessPool where

import Data.Bytes.Serial
import System.Process (ProcessHandle)

newtype TimeBudget = Seconds Int -- strictness flags dropped: GHC rejects them on newtype fields
newtype SpaceBudget = Megabytes Int
type Budget = (TimeBudget, SpaceBudget)
newtype MaxProcesses = MaxProcesses Int
data Err = TimeExceeded | SpaceExceeded | Killed | InsufficientProcesses
type ID = Int

data Pool a b = Pool {
  -- | Evaluate the thunk in a separate process, with the given budget. 
  -- If there is no available process for that `Budget`, create a new one, 
  -- unless that would exceed the `MaxProcesses` bound, in which case
  -- fail fast with `Left InsufficientProcesses`.
  evaluate :: Budget -> MaxProcesses -> ID -> a -> IO (Either Err b),

  -- | Forcibly kill the process associated with an ID. Any prior `evaluate` for
  -- that `ID` should complete with `Left Killed`.
  kill :: ID -> IO (),

  -- | Shutdown the entire pool. After this completes, no other processes should be running
  shutdown :: IO ()
}

pool :: (Serial a, Serial b) => IO ProcessHandle -> IO (Pool a b)
pool createWorker = _todo

That is the full API. The implementation should be backed by a growable pool of processes. (If Haskell threads could specify a max heap size on startup, we could do everything in-process, but unfortunately, that isn't supported and it doesn't look like it's happening anytime soon.)

Here's a simple sketch of an implementation:

  • When the pool is created, launch a local (in-process) thread. Conceptually, this keeps a couple pieces of state:
    • available :: Map (TimeBudget, SpaceBudget) [ProcessHandle], which is the list of free worker processes ("workers") associated with each budget. We don't literally want to spin up a new OS process every time evaluate gets called.
    • running :: Map (TimeBudget, SpaceBudget) [ProcessHandle], which is the list of processes that are currently running a call to evaluate.
    • ids :: Map ID [ProcessHandle], storing the mapping from ID to processes with that ID.
  • When evaluate gets called, serialize the thunk using the argument passed to pool. Look up in available to see if there's an existing process configured with that budget which happens to be free:
    • If there isn't, check that creating a new process wouldn't exceed the maximum number of processes. If it would, fail fast with a Left InsufficientProcesses. If not, spin up a new process with that budget, add it to the available map, and move to the next step.
    • If there is, send the serialized thunk to that process using inter-process communication (or some socket abstraction that uses IPC when on the same machine) and wait for the reply, which should be deserialized as an Either Err b.
    • When results come back, update the available, running, and ids state accordingly.
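A minimal sketch of that bookkeeping, guarding the three maps with an MVar and simplifying Budget to a pair of Ints for brevity - serialization and IPC are left out, and everything here is illustrative rather than a committed design:

module Unison.Runtime.ProcessPool.Sketch where -- hypothetical module name

import Control.Concurrent.MVar
import Data.Map (Map)
import qualified Data.Map as Map
import System.Process (ProcessHandle)

type Budget = (Int, Int) -- (seconds, megabytes), simplified from the API above
type ID = Int

-- the three maps described in the sketch above
data PoolState = PoolState
  { available :: Map Budget [ProcessHandle] -- free workers per budget
  , running   :: Map Budget [ProcessHandle] -- workers evaluating a thunk
  , ids       :: Map ID [ProcessHandle]     -- processes per evaluation ID
  }

-- take a free worker for a budget, if any, marking it as running
checkout :: MVar PoolState -> Budget -> IO (Maybe ProcessHandle)
checkout st budget = modifyMVar st $ \s ->
  case Map.findWithDefault [] budget (available s) of
    []       -> pure (s, Nothing) -- caller decides: spawn a worker or fail fast
    (p : ps) -> pure
      ( s { available = Map.insert budget ps (available s)
          , running   = Map.insertWith (++) budget [p] (running s) }
      , Just p )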

Note: Any restriction of privileges other than time / space budgeting will be handled before a call to evaluate. So for instance, if we want to disallow write access to the node's local data store, this would be implemented by inspecting the term, and making sure it cannot reference any such functions. We'll call this a "capability failure" vs a "resource budget failure" caused by a computation exceeding its time or space budget.

The pool is backed by a number of worker processes (or just "workers"). A worker process will be initialized with a CPU and space budget (probably via command line flags), and its main logic will be some a -> IO b:

module Unison.Runtime.Worker where

worker :: (Serial a, Serial b) => (a -> IO b) -> IO ()
worker eval = ...

main :: IO ()
main = worker _todo

The time budget will be handled internally in the Haskell code, but the memory budget will have to be handled via an RTS flag: it looks like myprog +RTS -M1024m will limit myprog to a maximum of 1024 megabytes of heap. The time budget should be handled internally so that the same worker can be reused rather than having to spin up a new process every time; it will be quite common to have lots of sequential requests with the same budget.
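Since the worker enforces its own time budget, GHC's System.Timeout is probably the natural tool. A minimal sketch, restating the TimeBudget and Err shapes from the API above:

import System.Timeout (timeout)

newtype TimeBudget = Seconds Int
data Err = TimeExceeded | SpaceExceeded | Killed | InsufficientProcesses

-- run one evaluation under the worker's time budget; the space budget is
-- still enforced by the +RTS -M flag the worker was started with
withTimeBudget :: TimeBudget -> IO b -> IO (Either Err b)
withTimeBudget (Seconds s) action = do
  result <- timeout (s * 1000000) action -- `timeout` takes microseconds
  pure (maybe (Left TimeExceeded) Right result)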

If you are interested in this project and have questions (or suggestions), please post them here, or come discuss in the chat room.

More durable data structures: OrderedIndex and Queue

OrderedIndex, backed by a bytestring trie, probably with the API: empty : Order k -> Remote (OrderedIndex k v), otherwise the same as Index.

Queue:

Queue.empty : Remote (Queue a)
Queue.enqueue : a -> Queue a -> Remote Unit
Queue.dequeue : Queue a -> Remote (Optional a)

Depends on #77.

Meta-issue for upcoming work

Overall

The project has so far focused on proofs of concept, design exploration, and research. There's still some of that to do, but generally we'd like to shift focus to the engineering work needed to get the Unison language/libraries/runtime into a usable state for 'real' stuff. This will let us see how well the ideas that have been developed work at scale.

The milestone is having the implementation in good enough shape that you could write some highly-available service for your backend in pure Unison. And if you didn't mind being on the bleeding edge, this might even be something you put in production, or at least use internally.

Motivating use cases

  • YouTube backend. Should handle fact that some videos have huge demand, others very little. Basically a distributed, elastic, load-balanced, key-value store of video fragments + maybe a node pool for doing encoding on uploads. 95% of the work will be on the generic data structure, which would have lots of uses.
  • Twitter clone backend. Should handle fact that some people have millions of followers, others have very few.
  • Amazon Lambda clone. This is just an elastic node pool. Should be very little code.
  • Later: real-time P2P video app. This makes sense once the node protocol is converted over to UDP. Most of the service will be in having a small number of gateway nodes to facilitate UDP hole punching.

Very high level plan:

  • Improve core language and editing experience (data types + pattern matching + command line editing tool is a good MVP). This lets us develop nontrivial libraries in pure Unison, with a pleasant development workflow (no waiting for code compilation + easy refactoring).
  • Improve the runtime, including distributed communication API (basically, make the runtime good enough for 'real' work, not just proof of concept)
  • Build lots of awesome stuff :) See 'Motivating use cases' above.

More detailed plan

  • @pchiusano is working on #104, then will do #59, which unblocks a lot of stuff and will incidentally make the search engine example run at faster than glacier speed. :)
  • @refried is working on #115. Lots of possible stuff to work on once done with that.
  • @sfultong is on #77, moving on to #107 and/or #103
  • @runarorama is working on #108 (error handling and supervision primitives)

Things for the language and editor to address after that:

  • #112 (improving parser)
  • #106 (data declarations)
  • #105 (text-oriented codebase interface)

At this point, we now have a nice interface to a Unison codebase, a nice parser, and data types. We can add more stuff to the standard library, perhaps #114 (nontrivial distributed libraries), but there are lots of basic utilities to fill in ('obvious' stuff we take for granted from Haskell's standard libraries).

The next big thing is more work on the Unison runtime, which is all blocked on #59 (separate runtime values from syntax tree). Once #59 is done, at least v1, we can do the following:

  • #109 (rewrite Multiplex to support node snapshots)
  • #110 (support contacting nodes outside current container)

Now we have pretty much all the ingredients to try writing some seriously nontrivial stuff. We have a nice editing and refactoring tool, a standard library, the ability to define new data types, and a runtime that isn't completely terrible. So we can start writing Unison code for some of these nontrivial use cases I gave above, like the Twitter backend, the YouTube backend, etc. How awesome will that be?? :)

Other notes:

  • Still some details to be filled in. See the section below which talks about upcoming design work. Some of the questions about key management might have some cascading effects later, but nothing I'm too scared of, so I think it's okay to punt on these things for now.

Upcoming design work:

  • Design work on node lifecycles - when is a node destroyed? what does it mean to destroy a node? (Paul or someone with strong FP design)
  • Design work on persistent data lifecycles - when is persistent data destroyed? (Paul or someone with strong FP design)
  • Design work on node key management (Paul)
    • Perhaps should make encryption API more explicit
      • Crypto.generate-key : Key
      • Crypto.generate-keypair : (Key, SecretKey)
    • Do spawned nodes need a public key? Maybe not:
      • spawn : Key -> Remote Node, which will use the provided key for encryption at rest and in transport
      • transfer : Key -> Node -> Remote ()
      • With this model, node id can just be random guid, fast to generate
      • No forward secrecy with this approach
  • Design work on live upgrades - how to upgrade running system? Also see Erlang for inspiration.
    • General idea: nodes are immutable, don't do hot replacement, just bring up new nodes with new logic, direct traffic to the new nodes
    • May need design for migrating ownership of persistent data, depends on
  • Implement GADTs (Paul, Dan, Arya, or someone w/ type)
  • Node snapshotting - basically, modify Multiplex to contain serializable continuations for all running computations. Lets us suspend a node at any time if it's not in use or the container is overloaded, and transport a running node between containers. This also gives a pretty good story for how to do live code upgrades to a running system! Just apply the patch to all the continuations in the Multiplex state.

Track down why MemBlockStore fails to propagate errors properly

In NodeUtil, in the node tests directory, we have this code which loads a node:

makeTestNode :: IO (TestNode, String -> Term V)
makeTestNode = do
  let crypto = C.noop "dummypublickey"
  putStrLn "creating block store..."
  blockStore <- MBS.make' (makeRandomAddress crypto) makeAddress
  putStrLn "created block store, creating Node store..."
  store' <- UBS.make blockStore
  -- store' <- FS.make "blockstore.file"
  putStrLn "created Node store..., building extra builtins"
  extraBuiltins <- EB.makeAPI blockStore crypto
  putStrLn "extra builtins created"
  let makeBuiltins whnf = concat [Builtin.makeBuiltins whnf, extraBuiltins whnf]
  node <- BasicNode.make hash store' makeBuiltins

If ExtraBuiltins contains a type annotation on a Builtin that does not parse, we get an obscure STM error (blocked indefinitely on an STM variable). It would be nice to track down why this happens and give a better error message!

Rewrite Multiplex to support Node snapshotting

Basically, modify Multiplex to contain serializable continuations for all running computations. This lets us suspend a node at any time if not in use or container is overloaded, and transport a running node between containers. To suspend the node, we just snapshot the state of all callbacks and serialize this to the BlockStore.

This also gives a pretty good story for how to do live code upgrades to a running system! Just apply the patch to all the continuations in the Multiplex state.

Also lets us nicely handle scheduled tasks with a minimum of resources - container can keep a heap of nodes, along with when they need to be woken up to execute some logic.

This is blocked on #102 and #59. The blockage on #59 is that I want to have a better picture of the runtime representation of Unison programs. This will inform the rewrite of Multiplex.

Probably needs to be worked on by @pchiusano.

Support contacting nodes outside current container

Blocked on #109.

At the moment, nodes can only talk to other nodes in the same container. Obviously, we need to make sure all nodes can talk to each other, even if they're on the other side of the internet.

There's some protocol design here, and there's the implementation.

I'm interested in having the remote inter-node protocol go over UDP. Multiplex already goes like 85% of the way there - it runs everything on a single packet-oriented I/O stream, and is connectionless. Packet delivery just needs to be made reliable + ordered.

Advantage of UDP is it's better for stuff like streaming audio / video (think: writing a video chat app), and we can do UDP hole-punching for NAT traversal.

An idea I had is to have the default ports be derived from the node's ID. So we take the node's id, hash it, and convert that using some deterministic function to a list of ports. This works out nicely if you have a bunch of node containers running behind a NAT.
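A sketch of that derivation, leaving the hash function abstract - the concrete function, the number of ports, and the range policy below are assumptions, not part of the proposal:

import Data.Bits (shiftL, (.|.))
import qualified Data.ByteString as B
import Data.Word (Word16)

-- derive `k` candidate ports from a node id, given some hash function
-- (e.g. SHA-256); every container derives the same list for the same id
portsFor :: (B.ByteString -> B.ByteString) -> Int -> B.ByteString -> [Word16]
portsFor hashBytes k nodeId =
  take k [ toPort hi lo | [hi, lo] <- pairs (B.unpack (hashBytes nodeId)) ]
  where
    pairs (a : b : rest) = [a, b] : pairs rest
    pairs _              = []
    -- fold two hash bytes into a port in the unprivileged range
    toPort hi lo =
      let w = (fromIntegral hi `shiftL` 8) .|. fromIntegral lo :: Word16
      in 1024 + (w `mod` (65535 - 1024))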

Doing NAT traversal by default seems reasonable, but there should also be a way to disable this behavior, as people running Unison nodes behind firewalls will often want more control over what traffic they allow into the local network.

Provide real implementation of `Cryptography` interface

We are currently looking for folks with some crypto/security background to help with auditing the implementation produced by @tmciver.

See Unison.Cryptography in shared. An implementation of this interface should be added to node, under Unison.Runtime.Cryptography, which should export some sort of function to produce a Cryptography (probably given a Noise keypair).

Provide an actual implementation using the cryptonite and cacophony packages.

Noise should be used for the publicKey type, and for establishing encrypted, forward-secret sessions (the implementation of pipeInitiator and pipeResponder).

Background reading: http://noiseprotocol.org/ which describes the Noise protocol framework.

Other notes: work in a branch off of topic/node-container

Naming empty blocks and refactoring sessions

Firstly, this project's ideas are really neat! I find that I often start editing code, only to forget what I was doing. My idea is the ability to name empty sections of code (I know Unison has a word for placeholders, but I forgot it) and refactoring sessions. That would make things a bit easier!

Do you think this project could collect more money to push it forward if it were written in Scala?

Question, really.

Just wondering - not trying to sabotage the project in any way (or to be a troll); it's more personal curiosity. Would there be a good reason to start writing this project in Scala? The author of FP in Scala could show a nice example of what can be done in Scala, and I believe many more people would be willing to contribute to (or argue about) this project - which seems to have nice ideas in it - and push it forward. Or maybe Scala is not good enough for this type of project - not FP-pure enough, perhaps.

Implement distributed systems API

This issue tracks work on the distributed systems API. Work on this API and implementation is proceeding on the distributed-evaluation branch. @joshcough is leading the work, with help from @pchiusano and whoever else would like to help!

Initial goal: just to get a really simple example running, like a parallel map/reduce across a few Unison nodes. The Unison editor probably won't be ready in time for this, so we'll be hand-crafting Unison terms for this demo, which will be somewhat tedious but fine for now.

Todos (short-term):

  • Complete client implementation. Request callbacks get registered with a CMap Channel (Handler t). We think we probably need to use weak references, since we may decide we don't care about the result
  • Housecleaning for the Remoting module - get rid of all unqualified imports. Switch order of Evaluate t env arguments to Evaluate env t.
  • Revisit packet format (see comment below)
  • Use weak references for Waiting map, or at least think through whether we should do this.

Todos (longer-term):

  • Negotiate missing hashes using a lazily constructed Merkle Tree, a la Git.
    • If there's a very small number of transitive dependencies, may want to just send all these hashes up front to minimize round trips.
  • Support for sandboxing. Beyond just the sandboxing itself, we also want cryptographic verification of access to a sandbox. Simple idea, needing more design: each sandbox is identified by a random token, S. Any computation wishing to run in that sandbox proves it knows the token by sending some random bytes pad and a timestamp ts, followed by hash(pad ++ ts ++ S). The recipient (who also knows S) can verify the hash is as expected. The sender must have knowledge of S, and the timestamp prevents replay attacks. (A sketch of this check appears after this list.)
    • If the abs(timestamp - current time) is greater than some small period of time, e, reject. Can also keep a small buffer of the pad bytes received in the last e time, and reject any messages whose pad bytes are a duplicate. This prevents replay attacks even in the period of time e, while allowing for some clock skew between nodes.
  • Evaluate remote expressions in a separate process pool. This is part of sandboxing: a sandbox gets a limited CPU and memory budget, which can only be achieved with process separation (AFAIK, GHC doesn't support configurable per-thread heaps yet).
  • Acquire / release TCP connections via a connection pool rather than directly from the OS. Even something as simple as keeping the connection open for a few seconds after it is released will be a huge efficiency win if nodes are rapidly sending computations back and forth.
  • Encrypt (and optionally authenticate) all inter-node communication
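A sketch of the verification step described in the sandboxing item above, with the hash function and clock left abstract (all names here are illustrative):

import qualified Data.ByteString as B
import qualified Data.ByteString.Char8 as C8

-- what the sender transmits alongside the computation
data Proof = Proof
  { pad    :: B.ByteString -- fresh random bytes
  , ts     :: Integer      -- sender's timestamp, seconds since epoch
  , digest :: B.ByteString -- hash (pad ++ ts ++ S)
  }

-- accept iff the timestamp is within the skew window `e` and the digest
-- matches; a full implementation would also keep the recent-pad buffer
-- described above, rejecting duplicate pads seen within the window
verify :: (B.ByteString -> B.ByteString) -- hash function, e.g. SHA-256
       -> Integer                        -- current time, seconds
       -> Integer                        -- allowed skew, e
       -> B.ByteString                   -- the sandbox token S
       -> Proof
       -> Bool
verify hashBytes now e s (Proof p t d) =
  abs (t - now) <= e
    && hashBytes (B.concat [p, C8.pack (show t), s]) == d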

Worklog:

  • 8/12/15 - Pairing session, full round trip working (request sent to server, evaluated, and sent back to real client).
  • 7/29/15 - Pairing session, partial round trip working (request being sent to server, evaluated, and sent back to dummy client).
  • 7/23/15 - Another pairing session, we were pretty close to having a round trip working and have a very good sense of how everything fits together. We also introduced a class, Evaluate, that lets us be ignorant of Unison details in the implementation and focus on the high-level protocol. Also nice for testing!
  • 7/22/15 - @joshcough and @pchiusano met up for some pair programming to get this started. The sessionless nature of the protocol really tripped us up at first. We never keep a TCP connection open for any length of time, it's just open long enough to receive the request, then is closed immediately. The request contains information about where to send the response. I think we need one or two more pairing sessions to fully grok how everything will fit together. After that, there may be opportunities for others to help out, and we'll organize a list of subprojects that could potentially be worked on by others.

Parser improvements: support layout blocks and make better use of commit

Self-contained task: modify parser to implement layout blocks for let, let rec, do, etc. Parser library now supports user state, so should be able to port Ed Kmett's combinators. Might be simpler to just do this as a preprocessing pass that literally just inserts braces / semicolons.

(@refried or anyone familiar with functional parsing)

Minor thing: incorporate line numbers into error messages. All the info is there in parse state to support this.

Minor thing: audit grammar to use commit to avoid needless backtracking and give better parse errors.

This could be worked on now, work off of topic/searchengine.

Doesn't work on ubuntu 14.04 (ghc 7.6.3)

It got surprisingly far. I couldn't cabal sandbox init (cabal is at 1.16 and sandboxes require 1.18), but cabal install seemed to install all the dependencies before dying at:

Configuring unison-0.1...
Warning: This package indirectly depends on multiple versions of the same
package. This is highly likely to cause a compile failure.
package regex-base-0.93.2 requires mtl-2.1.2
package unison-0.1 requires mtl-2.2.1
package scotty-0.9.1 requires mtl-2.2.1
package resourcet-1.1.4.1 requires mtl-2.2.1
package exceptions-0.8.0.2 requires mtl-2.2.1
package bytes-0.15 requires mtl-2.2.1
package aeson-0.7.0.6 requires mtl-2.2.1
package mtl-2.1.2 requires transformers-0.3.0.0
package wai-extra-3.0.7.1 requires transformers-0.4.3.0
package unison-0.1 requires transformers-0.4.3.0
package transformers-compat-0.4.0.4 requires transformers-0.4.3.0
package transformers-base-0.4.4 requires transformers-0.4.3.0
package streaming-commons-0.1.12 requires transformers-0.4.3.0
package scotty-0.9.1 requires transformers-0.4.3.0
package resourcet-1.1.4.1 requires transformers-0.4.3.0
package mtl-2.2.1 requires transformers-0.4.3.0
package monad-control-1.0.0.4 requires transformers-0.4.3.0
package mmorph-1.0.4 requires transformers-0.4.3.0
package exceptions-0.8.0.2 requires transformers-0.4.3.0
package bytes-0.15 requires transformers-0.4.3.0
Building unison-0.1...
Preprocessing library unison-0.1...

src/Unison/ABT.hs:7:14: Unsupported extension: PatternSynonyms
Failed to install unison-0.1
cabal: Error: some packages failed to install:
unison-0.1 failed during the building phase. The exception was:
ExitFailure 1

Anyways, just FYI. I'm not really surprised, considering that you're installing half the Haskell ecosystem :) Anything more than the square root and you're screwed. (I haven't even upgraded to C++11 yet, and my projects try to have no dependencies; that's my aesthetic.)

Stream processing library for Unison

Effectful streams are a ubiquitous abstraction that will get used by lots of Unison programs. We want a nice, lightweight, efficient stream type that's well-integrated into Unison. Here's the Unison API:

data Stream f a

instance MonadPlus (Stream f) -- Unison does not have typeclasses but you get the idea

eval :: f a -> Stream f a

uncons :: Stream f a -> Stream f (Maybe (a, Stream f a)) 

run :: Stream Remote! a -> Remote (Vector a)

fromVector :: Vector a -> Stream f a

It is expected that <|> (which is Stream append) and >>= (which is the same idea as the list monad) take constant time, rather than needing to traverse the left-hand expression.

I expect that Stream Remote a values will be very common - it's a stream that might pull data from multiple Unison nodes.

The implementation will be based on type-aligned sequences, following FS2:

data Stream f a = Stream (forall b . Stack f a b -> Stack f a b)

data Stack f a b where
  Empty :: Stack f a a
  ConsBind :: (a -> Stream f b) -> Stack f b c -> Stack f a c 
  ConsEmit :: a -> Stack f a b -> Stack f a b
  ConsEval :: f a -> Stack f a b -> Stack f a b
  ConsAppend :: Stream f a -> Stack f a b -> Stack f a b

-- left as an exercise: implement `MonadPlus`, `eval`, `uncons`, and `run`

But we will have to defunctionalize this representation, so the functions are not Haskell functions but Unison functions, similar to what was done for Unison.Remote.
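For orientation, a few of these operations wire up directly in this representation (these typecheck with the RankNTypes/GADTs extensions the definitions above already require; >>= and uncons need more of the defunctionalized machinery, which is what the exercise below is about):

-- assuming the `Stream`/`Stack` definitions above, plus an unwrapper
unStream :: Stream f a -> Stack f a b -> Stack f a b
unStream (Stream g) = g

eval :: f a -> Stream f a
eval fa = Stream (ConsEval fa)

emit :: a -> Stream f a
emit a = Stream (ConsEmit a)

-- stream append (`<|>`): push the right stream onto the stack, then run the left
append :: Stream f a -> Stream f a -> Stream f a
append s1 s2 = Stream (\k -> unStream s1 (ConsAppend s2 k))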

Subtasks:

  • As an exercise to check for understanding, implement pure Haskell version of Stream as given above. Post a compiling gist for reference.
  • Add defunctionalized version of Stream to Unison.Stream (open question - should we just add these as another constructor to Unison.Term, like we did for Remote? Or possibly bite the bullet and split out a separate Value type for representing runtime values, and add it there.)
  • Integrate into Unison builtins so these functions can be used from Unison

Docker build instructions fail

The Docker build instructions fail on Mac OS X with:

Step 6 : ADD editor/stack.yaml /opt/unison/editor/stack.yaml
lstat editor/stack.yaml: no such file or directory

It can be fixed by copying the editor.yaml file in the root directory to editor/stack.yaml.

Build on HalVM

HalVM runs Haskell on Xen without an OS. Since Unison is all about leaving the Unix world behind, it would be nice if the node could be built on it.

Remaining todos for topic/node-container

Somewhat of a brain dump, working todo list for misc stuff that needs to be done on topic/node-container branch. I'll keep this updated.

  • Implement a main for NodeWorker that picks implementations of all its dependencies.
    • Create Protocol implementation
  • Come up with some sort of way for the container to actually run some Unison code, so we can submit 'distributed' programs to it. Okay if a hack for now, we just want something to get us running.
    • Hack is that container will run an HTTP server which can be used to submit programs to a node in the container. But you don't need to create the node ahead of time, it just springs into existence when messaged. Possible only because we aren't actually doing any crypto yet.
  • Implement a main for NodeContainer that picks implementations of all its dependencies.
  • Hook parser up with access to all needed builtins to do interesting stuff
  • Make sure node worker process shuts down cleanly after period of inactivity
  • Figure out why root node of the computation isn't ever shut down
  • Fix let rec interpretation
  • Fix any bugs that come up when trying to submit programs to the container
  • Improve let rec generalization, so can submit something like a full module with polymorphic functions to the container for evaluation

Investigate more flexible access models for persistent storage

Currently, persistent storage is tied to an originating node for both reads and writes. For instance, in the following program, the lookup call on the last line will occur on the n1 node, despite the fact that the surrounding computation is on n2 at that point:

Remote {
  n1 := Remote.spawn;
  n2 := Remote.spawn;
  ind := Remote {
    -- Remote.transfer : Node -> Remote Unit
    Remote.transfer n1;
    ind := Index.empty;
    Index.insert "Unison" "Rulez!!!1" ind;
    pure ind;
  };
  Remote.transfer n2;
  -- this will contact `n1` and do the lookup there
  Index.lookup "Unison" ind;
}

This works and is correct, but has the disadvantage that all reads route through the originating node, making that node a bottleneck. For read-heavy workloads, we can imagine relaxing this constraint and letting nodes with the same storage universe as the originating node just issue the query directly, without needing to route through the originating node. (For writes, I think writes should continue to always route through the originating node - we don't want shared, distributed mutable state - users should build higher-level abstractions in pure Unison for this sort of thing)

Some remarks with this approach:

  • We'd need to include the Universe as part of the runtime representation of an Index or any other persisted type.
  • There are some questions around encryption. At the moment, with this API, we are assuming encryption is transparent, done with some key derived from the node's private key. If we want to allow multiple nodes to do reads, we need to get them the key somehow, or just not encrypt the data.
    • One idea is to make key management and encryption more explicit in the API. So, it's not empty : Remote (Index k v), it's empty : Key -> Remote (Index k v) and lookup : Key -> k -> Index k v -> Remote (Optional v).
    • This is probably a good idea. More explicit, and we can provide common patterns just using pure Unison code.
  • There are questions around sandboxing. I've been thinking that sandboxes would certainly include control over what persistent data may be accessed and/or written. But since the sandbox is currently tied to the node, how do we enforce the sandboxing policy when other nodes may be issuing the queries? We may need to handle 'non-public' persistent data differently than persistent data that is 'world-readable'.

Separate runtime values from syntax tree

Blocked on #102

Currently, we directly interpret the term syntax tree. This is both ugly (forces us to pollute the syntax tree with stuff we really only care about at runtime) and inefficient as hell (means we are doing ABT operations and rebuilding trees at runtime).

Leaving this as a reminder to do something about this.

Some open questions, remarks:

  • Is it necessary to preserve the ability to reify values back into terms? If that's not needed, this becomes much easier. If so, then runtime values need to track additional information needed to be able to reconstruct terms, and this information must be preserved even if there are further levels of compilation happening.
  • Note: just because we track the information needed to reconstruct terms doesn't mean we have to compute with the term representation.
  • In the editor, we have the ability to step (link + beta reduce) and/or evaluate a subexpression. This replaces the selection with the evaluated result. To support this functionality requires either the ability to reify runtime values, or use of a separate code path for editor evaluation. It probably needs some sort of separate code path since the selection can contain free variables and the runtime is unlikely to be able to handle evaluation of expressions with free variables. (Or is this a bad assumption?)
  • When Unison eventually becomes dependently typed, ability to reify runtime values might be a huge performance boost during typechecking, which may have to perform normalization of terms. Todo: look into what other DT language implementations do.

Test, cleanup, and document project build instructions

I'd like to get a few things cleaned up with the new build:

  • Add an install-nix.sh script to this repo for installing nix. (Crib from try-reflex)
  • Add a vagrant setup file to this repo for setting up a valid linux dev box for this project. Use this box for testing the build on linux, and it'll also be useful if there's any devs on Windows who want to work on the project.
  • Once we have these, and assuming everything works:
    • Delete the existing SETUP.sh and shared sandboxes stuff.
    • Update README with build instructions.

Build instructions shouldn't assume any knowledge of nix, cabal, or even Haskell. A monkey should be able to follow the instructions and know at the end whether they have a working setup. For example:

$ git clone https://github.com/unisonweb/platform.git unison
$ cd unison
$ # if you don't already have Nix package manager installed
$ ./nix-install.sh
# you'll be prompted to install nix package manager
$ cd node
$ nix-shell
# wait a while
$ cabal configure
$ cabal test
... # if all goes well
Test suite tests: RUNNING...
Test suite tests: PASS
Test suite logged to: dist/test/unison-node-0.1-tests.log
1 of 1 test suites (1 of 1 test cases) passed.

@Ericson2314 I can take care of the README updates, but do you think you could test out the build on Linux inside a Vagrant environment? Assuming that works, we can check in the vagrantfile.

Possible to use 7.10.3?

Hi Paul,

the automated build on Docker Hub takes too long and is killed after 2h.
The situation would improve if we could skip the stack setup steps, which consume a lot of time.
Unfortunately, I didn't find a Docker container that contains the required versions of GHC and GHCJS, which would let us skip the stack setups.
We could at least skip the first stack setup if Unison works under 7.10.3, because that's the first version for which the official Docker image also contains Stack.
Have you already tried building with 7.10.3? Would it be possible to switch?

Possibly modify `Reference` to contain address for `Derived`

Currently, Reference can be either a Builtin Text or a Derived Hash. I'd like to modify this to Derived Hash addr, so:

data Reference addr = Builtin Text | Derived Hash addr

The idea here is that addr will be the same as (or will contain) the addr type used by BlockStore addr - so a Reference tells you how to obtain the actual source of the definition from the BlockStore, and we can verify the Unison hash is what we expect when reading from the BlockStore.

Without this change, we would need to somehow maintain a (potentially very large) mapping from Unison hashes to the addr type used by the BlockStore addr or else assume that the BlockStore is indexed by Unison hash.

This will be a somewhat wide-ranging, but mechanical refactoring, since lots of things use Reference and we will need to propagate the type parameter.

Some questions around how this works with distributed programming API:

  • Do we have to rewrite the incoming Reference values that get synced?
  • OR do we just let addr be like a (Node, addr) pair, or a (Node, Universe, addr) triple, so the recipient (and any recipient it forwards the computation to) always has all the information they need to be able to sync that hash?

This second option seems pretty elegant, and it solves a problem I've been wondering about - where do we get a hash from if the immediate sender doesn't have it? (Which seems like it can occur since nodes can just forward a computation along to another node without needing all the definitions.) With this scheme, the node would just fetch the hash from the mentioned node, or any other node in the same universe (sharing the same BlockStore).

Building blocks for making node runtime agnostic to underlying storage layer: `BlockStore` and `Namespace`

Currently, we have some of the node runtime relying on access to the local filesystem to persist some of its state. We'd like to move away from this and make the storage backends pluggable. For that we need a couple interfaces which I'm calling BlockStore (an immutable, content-addressed storage layer in which blocks are addressed by a hash of their content) and Namespace (a mutable name service in which the values associated with a key may be altered given a proper signature).

Node containers as discussed in this post will use these APIs for storing persistent state of the container and of individual nodes in the container.

Here is a sketch of the APIs, which can just be added to unison-shared.

module BlockStore where

import Data.ByteString (ByteString)

newtype Hash = Hash ByteString
newtype Series = Series ByteString -- representation assumed; the issue leaves `Series` abstract

-- | Represents an immutable content-addressed storage layer. We can insert
-- some bytes, getting back a `Hash` value that can be used for `lookup`.
-- Given `bs : BlockStore` and `bytes: ByteString`:
--   `insert bs bytes >>= \h -> lookup bs h` should result in `Just bytes`.
-- also `insert bs bytes >> insert bs bytes` is equivalent to `insert bs bytes`, requiring
-- that the returned hash be purely a hash of the content of `bytes`
data BlockStore = BlockStore {
  insert :: ByteString -> IO Hash,
  lookup :: Hash -> IO (Maybe ByteString),
  -- proposal for allowing GC of old blocks; can allocate a series
  insertSeries :: ByteString -> IO (Hash, Series),
  -- this fails if the hash does not match the last written block for this series
  -- but if it succeeds, the input hash is marked as garbage and can be deleted
  update :: Series -> Hash -> ByteString -> IO Hash
}

Whenever you need to persist some values, you call insert on the block store, and use the returned Hash as the handle to that persisted data. delete is deliberately missing for now.

For implementing mutable, persistent data structures like a key-value store, we need the ability to update what hash is considered 'current' (similar model to git, where we have an immutable DAG of hashes, and branch names are mutable pointers to a hash). For this we use a Unison.Namespace:

module Unison.Namespace where

newtype Name = Name ByteString
newtype Nonce = Nonce Word64

data Authorization = Authorization { nonce :: Nonce, bytes :: ByteString } -- `data`, not `newtype`: two fields

-- | A mutable namespace of hashes. 
data Namespace key fingerprint h = Namespace {
  -- | Associate the given name with the provided key, returning a hash of key
  declare :: key -> Name -> IO fingerprint,
  -- | Resolve a fingerprint + name to a hash
  resolve :: fingerprint -> Name -> IO (Maybe (h, Nonce)),
  -- | Requires proof of knowledge of the key passed to `declare`
  publish :: fingerprint -> Name -> Authorization -> h -> IO (Maybe Nonce)
}

The idea here is that the owner of the key passed to declare may update the value associated with that name, but the Namespace store enforces strictly sequential updates to a particular name by requiring that the authorization passed to publish incorporate a nonce which is updated after each publish. Thus multiple threads may contend to publish, and the updates are forced to be linearized, similar to STM.

The usual implementation of this interface would be for key to be a public key, fingerprint to be a hash of that public key, and Authorization to be a signature (using the corresponding private key) of the nonce plus the new value h plus perhaps some padding, etc. But the way the interface is written, we can also have key be a private key, fingerprint be a hash of that private key, and Authorization bytes be a hash of (nonce, private key, new hash).
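To make the contention story concrete, here is a client-side sketch against the Namespace record above; publishLoop and mkAuth are illustrative names, and it assumes a Nothing from publish signals a stale or rejected authorization:

-- retry until our update wins the race for the current nonce
publishLoop :: Namespace key fingerprint h
            -> fingerprint
            -> Name
            -> (Nonce -> h -> Authorization) -- signs (nonce, new value)
            -> h
            -> IO Bool
publishLoop ns fp name mkAuth newH = do
  current <- resolve ns fp name
  case current of
    Nothing -> pure False -- the name was never declared
    Just (_, nonce) -> do
      accepted <- publish ns fp name (mkAuth nonce newH) newH
      case accepted of
        Just _  -> pure True -- linearized; the store hands back a fresh nonce
        Nothing -> publishLoop ns fp name mkAuth newH -- lost the race: retry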

Subtasks:

  • Sanity check: make sure we can implement key-value store atop these interfaces, possibly make adjustments to the interfaces accordingly
  • Implement these interfaces in unison-shared
  • Provide in-memory implementations of both these interfaces, Unison.BlockStore.MemoryBacked and Unison.Namespace.MemoryBacked, in unison-shared project.
  • Provide file-backed implementation of both these interfaces, Unison.BlockStore.FileBacked and Unison.Namespace.FileBacked
  • Optional: provide combinators for adding caching to a BlockStore or a Namespace
  • Implement acid-state-like library parameterized on a BlockStore and Namespace
  • Convert key-value store implementation to use this acid-state-like library. A big benefit here is that we have more control over what is actually persisted and what lives in memory. For instance, we can have just the keys live in memory

Compile errors in TermSearchboxParser.hs during editor build

I hit the following compile errors when running stack --stack-yaml editor.yaml build.

It looks like they might have been introduced in #70 and, I guess, not fixed by #71, but I'm not sure. Paging @refried anyway - can you take a look?

...
unison-editor-0.1: build
Completed 28 action(s).

--  While building package unison-editor-0.1 using:
      /home/vagrant/.stack/setup-exe-cache/x86_64-linux/setup-Simple-Cabal-1.22.4.0-ghcjs-0.2.0.20151001_ghc-7.10.2 --builddir=.stack-work/dist/x86_64-linux/Cabal-1.22.4.0_ghcjs build lib:unison-editor exe:editor --ghc-options " -ddump-hi -ddump-to-file"
    Process exited with code: ExitFailure 1
    Logs have been written to: /vagrant/.stack-work/logs/unison-editor-0.1.log

    Configuring unison-editor-0.1...
    Preprocessing library unison-editor-0.1...
    [1 of 8] Compiling Unison.TermSearchboxParser ( src/Unison/TermSearchboxParser.hs, .stack-work/dist/x86_64-linux/Cabal-1.22.4.0_ghcjs/build/Unison/TermSearchboxParser.js_o )

    /vagrant/editor/src/Unison/TermSearchboxParser.hs:34:10:
        Couldn't match expected type ‘Parser String’
                    with actual type ‘(Char -> Bool) -> Parser String’
        Probable cause: ‘takeWhile’ is applied to too few arguments
        In the expression: takeWhile Char.isDigit
        In an equation for ‘digits’: digits = takeWhile Char.isDigit

    /vagrant/editor/src/Unison/TermSearchboxParser.hs:34:20:
        Couldn't match type ‘Char -> Bool’ with ‘[Char]’
        Expected type: String
          Actual type: Char -> Bool
        Probable cause: ‘Char.isDigit’ is applied to too few arguments
        In the first argument of ‘takeWhile’, namely ‘Char.isDigit’
        In the expression: takeWhile Char.isDigit

    /vagrant/editor/src/Unison/TermSearchboxParser.hs:46:29:
        Couldn't match expected type ‘Parser String’
                    with actual type ‘(Char -> Bool) -> Parser String’
        Probable cause: ‘takeWhile’ is applied to too few arguments
        In the second argument of ‘(*>)’, namely
          ‘takeWhile (\ c -> c /= '"')’
        In the first argument of ‘(<*)’, namely
          ‘char '"' *> takeWhile (\ c -> c /= '"')’

    /vagrant/editor/src/Unison/TermSearchboxParser.hs:46:40:
        Couldn't match expected type ‘Char -> Bool’
                    with actual type ‘[Char]’
        The lambda expression ‘\ c -> c /= '"'’ has one argument,
        but its type ‘String’ has none
        In the first argument of ‘takeWhile’, namely ‘(\ c -> c /= '"')’
        In the second argument of ‘(*>)’, namely
          ‘takeWhile (\ c -> c /= '"')’

Design for error handling and supervision primitives

For distributed Unison programs, we need a nice story for doing error handling and supervision. It's not as simple as adding try to the Remote effect:

try : Remote a -> Remote (Either Err a)
fail : Err -> Remote a

The reason this is insufficient: consider a program like this:

do Remote
  a := foo
  Remote.transfer alice
  b := bar
  Remote.transfer bob
  c := baz
  Remote.transfer carol
  d := qux
  Remote.pure (f a b c d)

If the bob node is running on a machine that gets hit by an asteroid or ends up isolated during a network partition, neither alice nor carol can be made aware of this fact. Alice could in theory wait until Bob replies that he's done, but then Remote.transfer is not 'tail recursive', which is no good.

It seems like we'll need a separate concept of supervision, so an entire Remote computation can be monitored by another node. During supervision, pings will be sent to that node and a best-effort will be made to notify the supervisor of failures. If the supervisor stops getting pings or receives an error, it can respond in some way.

Good background: read about how Erlang does it

This is pretty open-ended design work. Am very interested to see what we can come up with!

Also, this would be a good thing to do a blog post on once we have a design we like.

Editor eval feature

Currently, it is not possible to evaluate expressions in the editor.
This issue is opened to keep track of whether someone is working on it.

@pchiusano mentioned on gitter that it should be easy to implement and that he had a few other features for the editor in mind -- I suggest opening issues for these additional editor features.

Investigate/implement SQLite-based or other BlockStore implementation for Windows

At the moment, we do not have a fast, low-memory, BlockStore implementation that runs on all platforms. The current implementations have the following limitations:

  • MemBlockStore is totally ephemeral - its state doesn't survive node container shutdown. Useful mainly for testing and for use by a standalone JS editor.
  • FileBlockStore uses acid-state, so it persists its state, but keeps the entire store in memory. It also is apparently somewhat slow? I'm guessing the performance could be improved by tuning how we use acid-state, but keeping the entire store in memory is not going to work long-term.
  • LevelDbStore uses LevelDB, which works nicely but is painful to build on Windows. We want people to be able to easily run Unison nodes on their Windows machines too, so that is a problem (unless we can figure out a good story for building on Windows).

So, this task is to investigate a SQLite-backed implementation of BlockStore. SQLite runs on pretty much everything (Windows phone 8 anyone??) and I'm guessing would be easy to integrate into our build. For implementation strategy, it seems like it would work to represent the BlockStore with one two-column table, with columns (hash, blob), with an index on the hash column. Then one more table for tracking the series - (seriesid, hash), with an index on seriesid.

If there's another good embedded key-value store option on Windows, that would be fine also.

There seem to be a couple sqlite bindings on hackage. Not sure which is best.

For questions, @sfultong is something of an expert on BlockStore (he produced the three existing implementations). I'm also available for general support. @sfultong if you decide you'd like to implement this, feel free to claim it - just assign it to yourself. But this might be a good project for someone else to get involved with as a way to get their feet wet with the project.

One additional requirement - the implementation strategy should be obviously safe against SQL injection attacks. No manually gluing raw strings together to construct queries, obviously. Tests should include some 'Bobby tables' style queries.
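To make the injection-safety requirement concrete, here is roughly what the two core operations could look like with parameterized queries - assuming the sqlite-simple bindings, which are only one of the Hackage options alluded to above; the module and table names are made up:

{-# LANGUAGE OverloadedStrings #-}
module Unison.BlockStore.SQLite where -- hypothetical module name

import qualified Data.ByteString as B
import Database.SQLite.Simple (Connection, Only (..), execute, query)

-- the `?` placeholders keep caller-supplied bytes out of the SQL text,
-- so no 'Bobby tables' input can change the query's structure
insertBlock :: Connection -> B.ByteString -> B.ByteString -> IO ()
insertBlock conn hash blob =
  execute conn "INSERT OR REPLACE INTO blocks (hash, blob) VALUES (?, ?)" (hash, blob)

lookupBlock :: Connection -> B.ByteString -> IO (Maybe B.ByteString)
lookupBlock conn hash = do
  rows <- query conn "SELECT blob FROM blocks WHERE hash = ?" (Only hash)
  pure $ case rows of
    [Only blob] -> Just blob
    _           -> Nothing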

Build issues

Sorry to be that guy, but I haven't been able to build unison using either the docker method

--  While building package ghcjs-0.2.0.20151001 using:
      /root/.stack/programs/x86_64-linux/ghcjs-0.2.0.20151001_ghc-7.10.2/src/.stack-work/dist/x86_64-linux/Cabal-1.22.4.0/setup/setup --builddir=.stack-work/dist/x86_64-linux/Cabal-1.22.4.0 build lib:ghcjs exe:ghcjs exe:ghcjs-boot exe:ghcjs-pkg exe:ghcjs-run exe:haddock-ghcjs exe:hsc2hs-ghcjs --ghc-options " -ddump-hi -ddump-to-file"
    Process exited with code: ExitFailure (-9) (THIS MAY INDICATE OUT OF MEMORY)

or stack (even with the help of the Dockerfile & some fiddling with lts versions per some other Q&A on the web).

stack build unison-node completed fine, and scotty is running.

13:14 lpatliclai25f:~/opensource/unison (master)$ cd editor
13:15 lpatliclai25f:~/opensource/unison/editor (master)$ stack build
Warning: /Users/arya/opensource/unison/editor/stack.yaml: Unrecognized fields in ProjectAndConfigMonoid: compiler, compiler-check, setup-info
Warning: File listed in unison-editor.cabal file does not exist: .gitignore
Warning: File listed in unison-editor.cabal file does not exist: README.markdown
Warning: File listed in unison-editor.cabal file does not exist: CHANGELOG.markdown
While constructing the BuildPlan the following exceptions were encountered:

--  While attempting to add dependency,
    Could not find package ghcjs-base in known packages

--  Failure when adding dependencies:    
      webkitgtk3: needed (>=0.14.0.0 && <0.15), latest is 0.14.1.1, but not present in build plan
    needed for package: ghcjs-dom-0.2.2.0

--  Failure when adding dependencies:    
      ghcjs-dom: needed (>=0.2.1 && <0.3), latest is 0.2.3.1, but couldn't resolve its dependencies
      webkitgtk3: needed (==0.14.*), latest is 0.14.1.1, but not present in build plan
      webkitgtk3-javascriptcore: needed (==0.13.*), latest is 0.13.1.1, but not present in build plan
    needed for package: reflex-dom-0.2

--  Failure when adding dependencies:    
      ghcjs-base: needed (-any), but not present in build plan
      ghcjs-dom: needed (-any), latest is 0.2.3.1, but couldn't resolve its dependencies
      reflex-dom: needed (-any), latest is 0.3, but couldn't resolve its dependencies
    needed for package: unison-editor-0.1

--  While attempting to add dependency,
    Could not find package webkitgtk3 in known packages

--  While attempting to add dependency,
    Could not find package webkitgtk3-javascriptcore in known packages

Recommended action: try adding the following to your extra-deps in /Users/arya/opensource/unison/editor/stack.yaml
- webkitgtk3-0.14.1.1
- webkitgtk3-javascriptcore-0.13.1.1

You may also want to try the 'stack solver' command

(adding those two extra-deps didn't help)

13:18 lpatliclai25f:~/opensource/unison/editor (master)$ stack solver
Warning: /Users/arya/opensource/unison/editor/stack.yaml: Unrecognized fields in ProjectAndConfigMonoid: compiler, compiler-check, setup-info
This command is not guaranteed to give you a perfect build plan
It's possible that even with the changes generated below, you will still need to do some manual tweaking
Asking cabal to calculate a build plan, please wait
Running /usr/local/bin/cabal --config-file=/var/folders/7s/2d2lywfx3jnd80_m7t13sh240000gn/T/cabal-solver14078/cabal.config install -v --dry-run --only-dependencies --reorder-goals --max-backjumps=-1 --package-db=clear --package-db=global /Users/arya/opensource/unison/editor/ /Users/arya/opensource/unison/editor/.stack-work/downloaded/081439c887119a07c7b8da03375f90f8e424f67b2069232e43d8b31bb9bb0154.git/ /Users/arya/opensource/unison/editor/.stack-work/downloaded/625c5e9a68bfa305ab974dab307e6a8c6c85fc6be770e14fea7c8249ebb20d3e.git/ exited with ExitFailure 1
/usr/bin/gcc -dumpversion
/usr/local/bin/haddock --version
/usr/local/bin/hpc version
looking for tool hsc2hs near compiler in /usr/local/bin
found hsc2hs in /usr/local/bin/hsc2hs
/usr/local/bin/hsc2hs --version
/usr/local/bin/ghc -c /var/folders/7s/2d2lywfx3jnd80_m7t13sh240000gn/T/16807282475249.c -o /var/folders/7s/2d2lywfx3jnd80_m7t13sh240000gn/T/1622650073984943658.o
/usr/bin/ld -x -r /var/folders/7s/2d2lywfx3jnd80_m7t13sh240000gn/T/1622650073984943658.o -o /var/folders/7s/2d2lywfx3jnd80_m7t13sh240000gn/T/1144108930470211272.o
/usr/local/bin/pkg-config --version
Warning: cannot determine version of /usr/bin/strip :
""
/usr/bin/tar --help
Reading available packages...
Updating the index cache file...
Choosing modular solver.
Resolving dependencies...
cabal: Could not resolve dependencies:
trying: unison-editor-0.1 (user goal)
next goal: ghcjs-base (dependency of unison-editor-0.1)
Dependency tree exhaustively searched.

OS X, Homebrew installations of both stack and docker.

Some issues with new nix build

Hi @Ericson2314, it is possible I am doing something wrong, but I am having some issues with the new setup. The first one is that node does not build:

$ cd node
$ nix-shell
...
Preprocessing test suite 'spec' for unix-time-0.3.4...
[1 of 2] Compiling UnixTimeSpec     ( test/UnixTimeSpec.hs, dist/build/spec/spec-tmp/UnixTimeSpec.dyn_o )
test/UnixTimeSpec.hs:62:48:
    Ambiguous occurrence ‘defaultTimeLocale’
    It could refer to either ‘Data.Time.defaultTimeLocale’,
                             imported from ‘Data.Time’ at test/UnixTimeSpec.hs:8:1-16
                             (and originally defined in ‘time-1.5.0.1:Data.Time.Format.Locale’)
                          or ‘System.Locale.defaultTimeLocale’,
                             imported from ‘System.Locale’ at test/UnixTimeSpec.hs:14:23-39
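
(Aside, for anyone hitting this: the ambiguity is the well-known time-1.5 breakage. time-1.5 began exporting defaultTimeLocale, which previously lived only in old-locale's System.Locale, so any module importing both gets this error. In your own code the fix is an explicit import, e.g.:

-- keep exactly one defaultTimeLocale in scope
import Data.Time hiding (defaultTimeLocale)
import System.Locale (defaultTimeLocale)

Here, though, the offending module is unix-time's own test suite, so the package, or its nix expression, needs patching.)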

The other issue is that I'm not sure how to actually build the editor once I'm in the shell:

$ cd ..
$ cd editor
$ nix-shell -A ghcjs
building path(s) ‘/nix/store/iypm6m009l4m3mg8fy224fxab6z4vvnl-head-hash.nix’

[nix-shell:~/Dropbox/projects/unison/editor]$ cabal configure
Warning: The package list for 'hackage.haskell.org' is 37.1 days old.
Run 'cabal update' to get the latest list of available packages.
Resolving dependencies...
Configuring unison-editor-0.1...
cabal: At least the following dependencies are missing:
reflex -any, reflex-dom -any, unison-shared -any

But when I ran nix-shell -A ghcjs for the first time, I saw it successfully build unison-shared, reflex, and reflex-dom. I'm not sure why cabal isn't finding them.

Any help would be greatly appreciated!

Use fancier data structure for `Unison.Runtime.Index`

Desired properties:

  • O(1) load/save time to/from the BlockStore
  • O(1) memory usage once loaded
  • O(log N) lookups
  • O(log N) inserts, deletes

Currently, Index can hit quadratic performance pretty easily, and load/save times are slow since it has to either serialize/deserialize the entire map or replay a bunch of inserts from the update log.

There are probably a lot of data structures that could give us these properties, though there might be some creativity involved in adapting them to run nicely atop BlockStore; one possible shape is sketched below.
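
For concreteness, here is a minimal sketch of one such structure, a B-tree-shaped index, in Haskell. The BlockStore interface and types below are hypothetical stand-ins (the real Unison API differs); the point is that keeping only a root hash in memory gives O(1) resident memory, and each lookup fetches one block per level:

{-# LANGUAGE LambdaCase #-}
module IndexSketch where

import Data.Map.Strict (Map)
import qualified Data.Map.Strict as Map

type Hash = String -- stand-in for a real content hash

-- hypothetical, simplified content-addressed store
data BlockStore = BlockStore
  { getBlock :: Hash -> IO Node
  , putBlock :: Node -> IO Hash }

data Node
  = Leaf (Map String String)   -- key/value pairs at the fringe
  | Internal [(String, Hash)]  -- (least key in subtree, child hash), sorted

-- O(log N) lookup: one block fetched per level
lookupKey :: BlockStore -> Hash -> String -> IO (Maybe String)
lookupKey bs root k = getBlock bs root >>= \case
  Leaf kvs -> pure (Map.lookup k kvs)
  Internal children ->
    case [h | (lo, h) <- children, lo <= k] of
      [] -> pure Nothing
      hs -> lookupKey bs (last hs) k -- rightmost child that could hold k

Inserts and deletes would rewrite a single root-to-leaf path (O(log N) new blocks via putBlock), and "saving" is just recording the new root hash, rather than serializing the whole map or replaying an update log.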

The search fails with 500 "Internal Server Error"

Expected behaviour: functions can be used and autocompletion works.
Actual behaviour: no functions are available, and thus no autocompletion either.

Steps to reproduce:

  1. Run the server with cabal run node
  2. Run the editor with elm-reactor and open http://localhost:8000/src/Unison/Editor.elm
  3. Literals (numbers, strings) can be entered, but there are no available functions.
  4. Open the developer console. The error manifests as follows:
POST http://localhost:8080/search 500 (Internal Server Error)
error: BadResponse 500 ("Internal Server Error")

Unison language specification/examples

Hi,

I found your project and the lamdu project, which seem to be going in a similar, interesting direction. Unison seems to have a somewhat bigger goal, with the whole networking side of things. But I am wondering: did you specify the supported language somewhere?

Basically the same issue as lamdu/lamdu#39

Daniel

Implement a sticky resource pool

Like #24, this is a fun, important, fairly self-contained project that we aren't blocked on right now, and it requires minimal background. Wanna help out with Unison development? This could be a good project!

Also like #24, this project will be an important component of the distributed systems API and reading or at least skimming that post is probably good background.

In the distributed systems API, all communication takes place over very short-lived logical connections: you open a connection to another node, send a computation over for evaluation, then close the connection immediately and register a callback to be invoked when a response comes back. So, at the 'logical' level, we are opening and closing a connection for each request/response. But at the runtime level, we'd like these connections to be sticky and hang around, even if just for a couple of seconds, since many times the response will come back right away, or two nodes will be talking to each other quite frequently.

This is actually a very general idea, and it can be implemented with a really generic interface:

module Unison.Runtime.ResourcePool where
-- acquire returns the resource, and the cleanup action ("finalizer") for that resource
data Pool p r = Pool { acquire :: p -> IO (r, IO ()) }

pool :: Ord p => Int -> (p -> IO r) -> (r -> IO ()) -> IO (Pool p r)
pool maxPoolSize acquire release = _todo

So, internally, pool will keep a Map p r (p for 'parameters'). When acquiring from the pool, if a resource for that p is already in the Map, it returns that. The finalizer it hands back just adds the resource back to the Map and schedules a task that, after a few seconds, deletes the resource from the Map and runs the real finalizer. (The delay period could be another parameter to pool.) If another resource with the same parameter p gets acquired before that happens, great! We just return the cached, already-open resource from our Map. A sketch along these lines follows the notes below.

A couple notes:

  • When a resource is acquired, it is temporarily removed from the Map. This is important, since we shouldn't in general assume that multiple threads can safely access an r.
  • The returned IO () finalizer action should check that the pool size does not exceed the max bound; in that case the finalizer can be run immediately, so we can be sure the pool doesn't grow too large.
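
To make the moving parts concrete, here is a rough sketch of that design in Haskell. This is one possible implementation, not a finished one: exception safety is omitted, and as noted in the comments, a real version would tag cached entries so a delayed release can't reap a resource that was returned, reclaimed, and returned again.

module ResourcePoolSketch where

import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.MVar
import Control.Monad (void)
import Data.Map (Map)
import qualified Data.Map as Map

data Pool p r = Pool { acquire :: p -> IO (r, IO ()) }

-- maxSize caps the number of cached resources; delayMicros is how long
-- a returned resource stays warm before the real finalizer runs
pool :: Ord p => Int -> Int -> (p -> IO r) -> (r -> IO ()) -> IO (Pool p r)
pool maxSize delayMicros make release = do
  cache <- newMVar Map.empty
  let acq p = do
        -- take any cached resource out of the Map, so no two threads
        -- ever share the same r
        cached <- modifyMVar cache (\m -> pure (Map.delete p m, Map.lookup p m))
        r <- maybe (make p) pure cached
        let giveBack = do
              size <- Map.size <$> readMVar cache
              if size >= maxSize
                then release r -- pool is full: release immediately
                else do
                  modifyMVar_ cache (pure . Map.insert p r)
                  -- after the delay, release the resource unless some
                  -- acquire reclaimed it in the meantime (a real version
                  -- would tag entries so a later re-insertion under the
                  -- same p isn't reaped early)
                  void . forkIO $ do
                    threadDelay delayMicros
                    stale <- modifyMVar cache (\m -> pure (Map.delete p m, Map.lookup p m))
                    maybe (pure ()) release stale
        pure (r, giveBack)
  pure (Pool acq)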

This library is nicely generic but it will be used by the node server to massively speed up the inter-node protocol! And it becomes especially important when the inter-node protocol requires some handshaking to establish an encrypted, forward-secret connection (like TLS or even something more lightweight like Noise pipes).

If you are interested in this project and have questions (or suggestions), please post them here, or come discuss in the chat room.

Better implementation of `BlockStore`

The current BlockStore implementation just loads everything into memory using acid-state. Come up with a better implementation that doesn't require keeping everything in memory; see the strawman sketch below.
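
As a strawman of the direction, a content-addressed store can keep nothing resident by writing each block to its own file, named by its hash. Everything below (the interface, the hash choice) is an illustrative stand-in, not the actual BlockStore API:

module BlockStoreSketch where

import qualified Data.ByteString.Char8 as BS
import Data.Hashable (hash) -- stand-in; a real store wants a cryptographic hash
import System.Directory (doesFileExist)
import System.FilePath ((</>))

type Hash = String

-- write a block to disk, keyed by its content hash
put :: FilePath -> BS.ByteString -> IO Hash
put dir bytes = do
  let h = show (abs (hash bytes))
  BS.writeFile (dir </> h) bytes
  pure h

-- read a block back, if present
get :: FilePath -> Hash -> IO (Maybe BS.ByteString)
get dir h = do
  ok <- doesFileExist (dir </> h)
  if ok then Just <$> BS.readFile (dir </> h) else pure Nothing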

Implement text-oriented interface to codebase

Implement the design discussed in this post. The design writeup is pretty detailed, with no real unknowns, so I think anyone could work on this.

Since the core of this codebase manipulation tool would also be eventually used by the Unison editor, I'd stick the core logic in shared/, and deal with the particulars of command line parsing stuff in node/, which will get a new executable.

Blocked on #104

shell.sh: dependencies couldn't be built

Running shell.sh on OS X to build for the first time, it eventually fails with:

error: build of ‘/nix/store/lz7449yjaa07qn3h9z8lkqk9pgn4rlk3-bash-4.3-p33.drv’ failed
/Users/danik/.nix-profile/bin/nix-shell: failed to build all dependencies

I've attached the full log from my most recent attempt: shell.sh.log.txt (https://github.com/unisonweb/platform/files/42017/shell.sh.log.txt)

Implement data declarations

There's some design work to do regarding hashing of data declarations. @pchiusano will need to do that.

Recommendation: implement patterns as desugarings to a -> Optional b values (a small sketch of this encoding appears below the task breakdown). Need to work out the details, but this requires no tweaks to the typechecker or runtime.

This unblocks work on libraries, and work on GADTs or a more advanced type system can happen in parallel.

Work out details: @pchiusano and/or anyone willing and able to work through the 'first-class patterns' paper.
Once the details are worked out: anyone can likely modify the parser.
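
To make the recommendation concrete, here is a tiny Haskell sketch of the pattern-combinator encoding, with Maybe standing in for Unison's Optional. All names are illustrative, not from the Unison codebase:

module PatternSketch where

import Control.Applicative ((<|>))
import Data.Maybe (fromMaybe)

-- a pattern is just a function that may fail to match
type Pattern a b = a -> Maybe b

wildcard :: Pattern a a
wildcard = Just

lit :: Eq a => a -> Pattern a ()
lit x a = if a == x then Just () else Nothing

-- combine sub-patterns over a pair
pair :: Pattern a c -> Pattern b d -> Pattern (a, b) (c, d)
pair p q (a, b) = (,) <$> p a <*> q b

-- a case pairs a pattern with a right-hand side using its bindings
caseOf :: Pattern a b -> (b -> r) -> a -> Maybe r
caseOf p rhs a = rhs <$> p a

-- try each case in order; the typechecker never sees patterns at all
match :: a -> [a -> Maybe r] -> r
match a cases = fromMaybe (error "non-exhaustive patterns")
                          (foldr (\c acc -> c a <|> acc) Nothing cases)

-- e.g. `case (0, 5) of (0, n) -> n; _ -> -1` desugars roughly to:
example :: Int
example = match ((0, 5) :: (Int, Int))
  [ caseOf (pair (lit 0) wildcard) (\(_, n) -> n)
  , caseOf wildcard (const (-1)) ]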

First-class patterns references:

http://lambda-the-ultimate.org/node/3096

https://reinerp.wordpress.com/category/pattern-combinators/

Finish search engine example

I currently have a bunch of outstanding work on the topic/searchengine branch. I'd like to wrap that up ASAP and get it merged before making any other changes to the codebase.

Fails to build on OS X via Nix: blaze-markup missing QuickCheck dependency

I can build editor/ fine, but node/ fails:

$ nix-env --version
nix-env (Nix) 1.10
$ git rev-parse HEAD
b1bd162411a73c0775b254838e2751aedac747fd
$ cd node
$ cat shell.sh
#!/bin/sh
nix-shell --option extra-binary-caches https://ryantrinkle.com:5443/ -j 1



$ ./shell.sh
building path(s) ‘/nix/store/9xpdszdib9bb1n7km5fi4cb9blz8mkwj-head-hash.nix’
building path(s) ‘/nix/store/z2gsy6i2ljz68ipn3nsw45xj9n68g5wz-head-hash.nix’
these derivations will be built:
  /nix/store/0hi9pyc93z9hykhsz0fkm72zl1y1pffs-haskell-case-insensitive-1.2.0.4.drv
  /nix/store/19qjlf35rkp0hssspnwql61z5yn1il10-haskell-http-date-0.0.6.drv
  /nix/store/pyrp79jfzr905lhsi1fc95nkvi20dcvc-haskell-blaze-markup-0.6.3.0.drv
  /nix/store/1dgmrbypxxgqm3v2q1alfa9c91sxq6v4-haskell-blaze-html-0.7.1.0.drv
  /nix/store/2zzk5g1jg567fdgs5p0w7x6n5d0w71gi-haskell-iproute-1.4.0.drv
  /nix/store/4r98s9cqqbx7j4k91symsdi8m1vb7z9n-haskell-quickcheck-instances-0.3.11.drv
  /nix/store/b80bfwjcr5jkkg2bj81vwka4k9l979yy-haskell-http-types-0.8.6.drv
  /nix/store/gnz7anx3mcyjniv8pvimmrnbmay56r2q-haskell-simple-sendfile-0.2.18.drv
  /nix/store/jlzln6pb2bdnj2y8gvjkfrb6nqq1rzqn-haskell-streaming-commons-0.1.12.drv
  /nix/store/rbm4xdwfjx8zn3rzc81n4qa2cxmh3rxv-haskell-vault-0.3.0.4.drv
  /nix/store/szibnsr8r0r6rzdw8axfjasd3mkb8848-haskell-wai-3.0.2.3.drv
  /nix/store/949wilhrg2lrc1k87y2bax510yaylqp5-haskell-warp-3.0.13.drv
  /nix/store/gg91gy9xr9hnljy91p21fa774dj3l5v5-haskell-unix-time-0.3.5.drv
  /nix/store/yrq2zdfka5y2pvbbx98q8xn0wf0wpzy8-haskell-fast-logger-2.3.1.drv
  /nix/store/fpw22hba8mwil1n7d4kvqyzmz92zm4bz-haskell-wai-logger-2.2.4.drv
  /nix/store/mr0vy2n5hh8dhjqs3a7rr1jil6rfm8ap-haskell-word8-0.1.2.drv
  /nix/store/wfnlzn1afhdck28a497kgrxw540lrlbi-haskell-cookie-0.4.1.5.drv
  /nix/store/b4wrq39bs2kjg4c52kpxkyjlw3rc4z51-haskell-wai-extra-3.0.7.1.drv
  /nix/store/gcdksc6k0968qn1jsv3w5v78ircg5s5f-haskell-hspec-wai-0.6.3.drv
  /nix/store/51v0nbrgr35px6icymd309kbc8fdgmxy-haskell-scotty-0.9.1.drv
  /nix/store/5w22709zxbwkdzpbvng3zyz9qrmymdaj-haskell-unison-shared-0.1.drv
  /nix/store/pkypjfrdnf84jjqndvg2ycci9ihvxvc6-haskell-cereal-0.4.1.1.drv
  /nix/store/i43vg2qb2lxkvdzd296rs5asmczkryy7-haskell-bytes-0.15.drv
  /nix/store/mmx3678i2lhmc1sdkjlfj0wdbi24wiqc-ghc-7.10.2.drv
building path(s) ‘/nix/store/5dv104bqhm20qw54v8zjgbq2hhkhs55i-haskell-blaze-markup-0.6.3.0’
setupCompilerEnvironmentPhase
Build with /nix/store/94janv6xg8vkhg4w5qpixybrd8kim3sk-ghc-7.10.2.
unpacking sources
unpacking source archive /nix/store/32amd5inch8ybppbl96xik1i4csbx7d5-blaze-markup-0.6.3.0.tar.gz
source root is blaze-markup-0.6.3.0
patching sources
compileBuildDriverPhase
setupCompileFlags: -package-db=/private/var/folders/mj/hplb9t7x66nfvy8pscpzz73r0000gn/T/nix-build-haskell-blaze-markup-0.6.3.0.drv-0/package.conf.d -j8 -threaded
[1 of 1] Compiling Main             ( Setup.hs, /private/var/folders/mj/hplb9t7x66nfvy8pscpzz73r0000gn/T/nix-build-haskell-blaze-markup-0.6.3.0.drv-0/Main.o )
Linking Setup ...
configuring
configureFlags: --verbose --prefix=/nix/store/5dv104bqhm20qw54v8zjgbq2hhkhs55i-haskell-blaze-markup-0.6.3.0 --libdir=$prefix/lib/$compiler --libsubdir=$pkgid --with-gcc=clang --package-db=/private/var/folders/mj/hplb9t7x66nfvy8pscpzz73r0000gn/T/nix-build-haskell-blaze-markup-0.6.3.0.drv-0/package.conf.d --ghc-option=-optl=-Wl,-headerpad_max_install_names --ghc-option=-j8 --disable-split-objs --disable-library-profiling --enable-shared --enable-library-vanilla --enable-executable-dynamic --enable-tests
Configuring blaze-markup-0.6.3.0...
Setup: At least the following dependencies are missing:
QuickCheck >=2.4 && <2.8
builder for ‘/nix/store/pyrp79jfzr905lhsi1fc95nkvi20dcvc-haskell-blaze-markup-0.6.3.0.drv’ failed with exit code 1
cannot build derivation ‘/nix/store/mmx3678i2lhmc1sdkjlfj0wdbi24wiqc-ghc-7.10.2.drv’: 1 dependencies couldn't be built
error: build of ‘/nix/store/mmx3678i2lhmc1sdkjlfj0wdbi24wiqc-ghc-7.10.2.drv’ failed
/Users/tom/.nix-profile/bin/nix-shell: failed to build all dependencies

This also failed with -j 8. I modified shell.sh to run with -j 1 to see which Nix package was causing issues.

GHC 7.10.1: missing compiler flag

Arch Linux 64-bit Kernel 3.19.3-3-ARCH
The Glorious Glasgow Haskell Compilation System, version 7.10.1

[ 9 of 21] Compiling Unison.ABT       ( src/Unison/ABT.hs, dist/dist-sandbox-49d2760b/build/Unison/ABT.o )
src/Unison/ABT.hs:224:13:
    Non type-variable argument
      in the constraint: Integral (Data.Bytes.Signed.Unsigned n)
    (Use FlexibleContexts to permit this)
    When checking that ‘hashInt’ has the inferred type

Required adding the following before it would compile:

index 6b02f04..921c611 100644
--- a/node/unison.cabal
+++ b/node/unison.cabal
@@ -90,7 +90,7 @@ library
     transformers-compat       ,
     vector                    >= 0.10.11.0

-  ghc-options: -Wall -fno-warn-name-shadowing -threaded -rtsopts -with-rtsopts=-N
+  ghc-options: -Wall -fno-warn-name-shadowing -threaded -rtsopts -with-rtsopts=-N -XFlexibleContexts
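
For what it's worth, a per-module alternative that avoids widening the project-wide ghc-options would be a pragma at the top of the one file that needs it, src/Unison/ABT.hs:

-- enables the extension for this module only
{-# LANGUAGE FlexibleContexts #-}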

Get vagrant box working

Here's the goal:

  • A simple vagrant up, then vagrant ssh, then cd /vagrant should put you in the same state as if you had installed Nix successfully on your local machine and were following the other instructions.
  • Update the Vagrant section of the README.
