
remit's Introduction

@jpwilliams/remit


A wrapper for RabbitMQ for communication between microservices. No service discovery needed.

npm install @jpwilliams/remit
const Remit = require('@jpwilliams/remit')
const remit = Remit({ name: 'user-service' })

remit
  .endpoint('user')
  .handler((event) => {
    return {
      name: 'Jack Williams',
      email: '[email protected]'
    }
  })
  .start()

// another service/process
const Remit = require('@jpwilliams/remit')
const remit = Remit({ name: 'api' })

const getUser = remit.request('user')
const user = await getUser(123)
console.log(user)

/* {
  name: 'Jack Williams',
  email: '[email protected]'
} */

What's remit?

A simple wrapper over RabbitMQ to provide RPC and ESB-style behaviour.

It supports request/response calls (e.g. requesting a user's profile), emitting events to the entire system (e.g. telling any interested services that a user has been created) and basic scheduling of messages (e.g. recalculating something every 5 minutes). Everything is load balanced across grouped services and redundant: if a service dies, another will pick up the slack.

There are four types you can use with Remit: requests, endpoints, emissions and listeners.

Endpoints and listeners are grouped by "Service Name" specified as name or the environment variable REMIT_NAME when creating a Remit instance. This grouping means only a single consumer in that group will receive a message. This is used for scaling services: when creating multiple instances of a service, make sure they all have the same name.
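The grouping behaviour can be sketched in plain JavaScript (an illustration of the semantics only, not remit's implementation): consumers sharing a service name form one group, and each message is delivered to exactly one consumer in that group.

```javascript
// Illustration only: consumers sharing a service name form one group,
// and each message goes to exactly one consumer in that group, round-robin.
function createBroker () {
  const groups = new Map() // service name -> { consumers, next }

  return {
    consume (name, fn) {
      if (!groups.has(name)) groups.set(name, { consumers: [], next: 0 })
      groups.get(name).consumers.push(fn)
    },
    publish (name, msg) {
      const group = groups.get(name)
      const consumer = group.consumers[group.next % group.consumers.length]
      group.next += 1
      consumer(msg)
    }
  }
}

const broker = createBroker()
const handledBy = []

// two instances of the same service share the name 'user-service'
broker.consume('user-service', () => handledBy.push('instance-1'))
broker.consume('user-service', () => handledBy.push('instance-2'))

broker.publish('user-service', 'message A')
broker.publish('user-service', 'message B')
// each message was handled by exactly one instance
```

Giving both processes the same name means they split the workload; giving them different names means each would receive its own copy of listened events.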


Contents


API/Usage


request(event)

  • event <string> | <Object>

Create a new request for data from an endpoint, targeting the event given by event. If an object is passed, the event property is required. See request.options for available options.

remit.request('foo.bar')

timeout and priority are explained under request.options and can be changed at any stage using request.options().

The request is sent by running the returned function (synonymous with calling .send()), passing the data you wish to make the request with.

For example, to retrieve a user from the 'user.profile' endpoint using an ID:

const getUserProfile = remit.request('user.profile')
const user = await getUserProfile(123)
console.log(user)
// prints the user's data

Returns a new request.

request.on(eventName, listener)

  • eventName <any>
  • listener <Function>

Subscribe to this request's dumb EventEmitter. For more information on the events emitted, see the Events section.

Returns a reference to the request, so that calls can be chained.

request.fallback(data)

  • data <any>

Specifies data to be returned if a request fails for any reason. Can be used to gracefully handle failing calls across multiple requests. When a fallback is set, any request that fails will instead resolve successfully with the data passed to this function.

const request = remit
  .request('user.list')
  .fallback([])

The error is still sent over the request's EventEmitter, so listening to 'error' lets you handle the error however you wish.

You can change the fallback at any point in a request's life and unset it by passing no arguments to the function.

Returns a reference to the request, so that calls can be chained.
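The semantics can be sketched with a hypothetical helper (illustration only; remit does this internally when .fallback() is set): a failing call resolves with the fallback value instead of rejecting.

```javascript
// Hypothetical helper mirroring the fallback semantics described above:
// a failing call resolves with the fallback value instead of rejecting.
async function sendWithFallback (call, fallback) {
  try {
    return await call()
  } catch (err) {
    // with a fallback set, the error is swallowed here
    // (remit would still emit it via the 'error' event)
    if (fallback !== undefined) return fallback.value
    throw err
  }
}

const failingCall = () => Promise.reject(new Error('endpoint down'))
// sendWithFallback(failingCall, { value: [] }) resolves with []
```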

request.options(options)

  • options <Object>
    • event <string> Required
    • timeout <integer> Default: 30000
    • priority <integer> Default: 0

Set various options for the request. This can be done at any point in a request's life, but will not affect requests that have already been sent.

const request = remit
  .request('foo.bar')
  .options({
    timeout: 5000
  })

Setting timeout to 0 results in there being no timeout. Otherwise it is the amount of time in milliseconds to wait before declaring the request "timed out".

priority can be an integer between 0 and 10. Higher priority requests will go to the front of queues over lower priority requests.

Returns a reference to the request, so that calls can be chained.
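Conceptually, priority ordering works like this (a plain-JavaScript sketch, not remit's or RabbitMQ's code):

```javascript
// Sketch of priority ordering: priorities run 0-10 and higher-priority
// messages are consumed first; FIFO order is kept within a priority.
const queue = []

function enqueue (msg, priority = 0) {
  queue.push({ msg, priority })
  // Array#sort is stable, so equal priorities keep their arrival order
  queue.sort((a, b) => b.priority - a.priority)
}

enqueue('automated report', 0)
enqueue('another automated report', 0)
enqueue('API request', 10)
// the API request is now at the front of the queue
```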

request.ready()

Returns a promise which resolves when the request is ready to make calls.

const request = await remit
  .request('foo.bar')
  .ready()

Any calls made before this promise is resolved will be automatically queued until it is.

Returns a promise that resolves with a reference to the request once it is ready, so that calls can be chained.

request.send([data[, options]])

Synonymous with request([data[, options]])

  • data <any> Default: null
  • options <Object>

Sends a request. data can be anything that plays nicely with JSON.stringify. If data is not defined, null is sent (undefined cannot be parsed into JSON).

const getUser = remit.request('user.getProfile')

// either of these perform the same action
const user = await getUser(123)
const user = await getUser.send(123)

options can contain anything provided in request.options, but the options provided will only apply to that single request.

Returns a promise that resolves with data if the request was successful or rejects with an error if not. Always resolves if a fallback is set.
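How per-send options layer over the request's base options can be sketched like this (illustration only; the property names match request.options above):

```javascript
// Per-send options apply to that single call only, layered over the
// request's base options, which stay untouched.
const baseOptions = { event: 'user.getProfile', timeout: 30000, priority: 0 }

function optionsFor (perSendOptions = {}) {
  return { ...baseOptions, ...perSendOptions }
}

const oneOff = optionsFor({ timeout: 5000 })
// oneOff.timeout is 5000, baseOptions.timeout is still 30000
```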


endpoint(event[, ...handlers])

  • event <string> | <Object>
  • ...handlers <Function>

Creates an endpoint that replies to requests.

event is the code requests will use to call the endpoint. If an object is passed, event is required. For available options, see endpoint.options.

const endpoint = await remit
  .endpoint('foo.bar', console.log)
  .start()

start() must be called on an endpoint to "boot it up" ready to receive requests. An endpoint that's started without a handler (a function or series of functions that returns data to send back to a request) will throw. You can set handlers here or using endpoint.handler. To learn more about handlers, check the Handlers section.

Returns a new endpoint.

endpoint.handler(...handlers)

  • ...handlers <Function>

Set the handler(s) for this endpoint. Only one series of handlers can be active at a time, though the active handlers can be changed using this call at any time.

const endpoint = remit.endpoint('foo.bar')
endpoint.handler(logRequest, sendFoo)
endpoint.start()

For more information on handlers, see the Handlers section.

Returns a reference to the endpoint, so that calls can be chained.

endpoint.on(eventName, listener)

  • eventName <any>
  • listener <Function>

Subscribe to this endpoint's dumb EventEmitter. For more information on the events emitted, see the Events section.

Returns a reference to the endpoint, so that calls can be chained.

endpoint.options(options)

endpoint.start()

endpoint.pause([cold])

  • cold <Boolean>

Pauses consumption of messages for this endpoint. By default, any messages currently in memory will be processed (a "warm" pause). If cold is provided as truthy, any messages in memory will be pushed back to RabbitMQ.

Has no effect if the endpoint is already paused or has not yet been started.

Returns a promise that resolves with the endpoint when consumption has been successfully paused.

endpoint.resume()

Resumes consumption of messages for this endpoint after being paused using pause(). If run on an endpoint that is not yet started, the endpoint will attempt to start.

Returns a promise that resolves with the endpoint when consumption has been successfully resumed.
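The warm/cold distinction for pausing can be sketched as follows (an illustration of the behaviour described above, not remit's implementation):

```javascript
// "Warm" pause drains the messages already held in memory;
// "cold" pause pushes them back to the broker instead.
function pauseConsumer (consumer, cold = false) {
  consumer.paused = true
  if (cold) {
    consumer.requeued.push(...consumer.inMemory) // back to RabbitMQ
  } else {
    consumer.processed.push(...consumer.inMemory) // finish what we hold
  }
  consumer.inMemory = []
  return consumer
}

const warm = pauseConsumer({ inMemory: ['a', 'b'], processed: [], requeued: [] })
const cold = pauseConsumer({ inMemory: ['a', 'b'], processed: [], requeued: [] }, true)
// warm.processed holds both messages; cold.requeued holds both messages
```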


emit.on(eventName, listener)

  • eventName <any>
  • listener <Function>

Subscribe to this emitter's dumb EventEmitter. For more information on the events emitted, see the Events section.

Returns a reference to the emit, so that calls can be chained.

emit.options(options)

emit.ready()

emit.send([data[, options]])


listen.handler(...handlers)

listen.on(eventName, listener)

  • eventName <any>
  • listener <Function>

Subscribe to this listener's dumb EventEmitter. For more information on the events emitted, see the Events section.

Returns a reference to the listen, so that calls can be chained.

listen.options(options)

listen.start()

listen.pause([cold])

  • cold <Boolean>

Pauses consumption of messages for this listener. By default, any messages currently in memory will be processed (a "warm" pause). If cold is provided as truthy, any messages in memory will be pushed back to RabbitMQ.

Has no effect if the listener is already paused or has not yet been started.

Returns a promise that resolves with the listener when consumption has been successfully paused.

listen.resume()

Resumes consumption of messages for this listener after being paused using pause(). If run on a listener that is not yet started, the listener will attempt to start.

Returns a promise that resolves with the listener when consumption has been successfully resumed.


Events

request, endpoint, emit and listen all export EventEmitters that emit events about their incoming/outgoing messages.

All of the events can be listened to by using the .on() function, providing an eventName and a listener function, like so:

const request = remit.request('foo.bar')
const endpoint = remit.endpoint('foo.bar')
const emit = remit.emit('foo.bar')
const listen = remit.listen('foo.bar')

request.on('...', ...)
endpoint.on('...', ...)
emit.on('...', ...)
listen.on('...', ...)

Events can also be listened to globally, by adding a listener directly to the type. This listener will receive events for all instances of that type. This makes it easier to introduce centralised logging to remit's services.

remit.request.on('...', ...)
remit.endpoint.on('...', ...)
remit.emit.on('...', ...)
remit.listen.on('...', ...)

The following events can be listened to:

Event     Description                                             Returns
data      Data was received                                       Raw data
error     An error occurred or was passed back from an endpoint   Error
sent      Data was sent                                           The event that was sent
success   The action was successful                               The successful result/data
timeout   The request timed out                                   A timeout object

Handlers

Endpoints and listeners use handlers to reply to or, uh, handle incoming messages. In both cases, these are functions or values that can be passed when creating the endpoint or listener, or added/changed in real time using the .handler() method.

If a handler is a value (i.e. not a function) then it will be returned as the data of a successful response. This is useful for simple endpoints that only need to return static data or simple values.
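One way to picture this is wrapping a non-function value so it behaves like a handler that always returns that value (illustration only; normaliseHandler is hypothetical, not part of remit's API):

```javascript
// A non-function handler value behaves like a handler that always
// returns that value; real functions pass through untouched.
function normaliseHandler (handler) {
  return typeof handler === 'function' ? handler : () => handler
}

const versions = normaliseHandler({ first: 'foo', second: 'bar' })
const greet = normaliseHandler(event => `hello ${event.data}`)
// versions() returns the same object for every call;
// greet() still runs as a normal handler function
```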

All handler functions are passed two items: event and callback. If callback is mapped (declared as a parameter), you will need to call it to indicate success/failure (see Handling completion below). If you do not map a callback, you can reply synchronously or by returning a Promise.

Handlers are used for determining when a message has been successfully dealt with. Internally, Remit uses this to ascertain when to draw more messages in from the queue and, in the case of listeners, when to remove the message from the server.

RabbitMQ gives an at-least-once delivery guarantee, meaning that, ideally, listeners are idempotent. If a service dies before it has successfully returned from a handler, all messages it was processing will be passed back to the server and distributed to another service (or the same service once it reboots).
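A common way to get idempotency is to key off the eventId provided on each event (see the Event object section below). A sketch, assuming duplicates should simply be ignored (a real service would persist the seen IDs somewhere durable):

```javascript
// With at-least-once delivery the same message can arrive twice,
// so skip eventIds we have already processed.
const seenEventIds = new Set()
const processed = []

function handle (event) {
  if (seenEventIds.has(event.eventId)) return // duplicate delivery: ignore
  seenEventIds.add(event.eventId)
  processed.push(event.data)
}

handle({ eventId: 'evt-1', data: { userId: 123 } })
handle({ eventId: 'evt-1', data: { userId: 123 } }) // redelivered after a crash
// the data is processed exactly once
```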

Simple returns

Here, we create a simple endpoint that returns {"foo": "bar"} whenever called:

const endpoint = await remit
  .endpoint('foo.bar', () => {
    return {foo: 'bar'}
  })
  .start()

Incoming data

We can also parse incoming data and gather information on the request by using the given event object.

const endpoint = await remit
  .endpoint('foo.bar', (event) => {
    console.log(event)
  })
  .start()

Event object

When called, the above will log out the event it's been passed. Here's an example of an event object:

{
  started: <Date>, // time the message was taken from the server
  eventId: <UID>, // a unique ID for the message (useful for idempotency purposes)
  eventType: 'foo.bar', // the eventName used to call this endpoint/listener (useful when using wildcard listeners)
  resource: 'service-user', // the name of the service that called/emitted this
  data: {userId: 123}, // the data sent with the request
  timestamp: <Date>, // the time the message was created

  // extraneous information, currently containing tracing data
  metadata: {
    originId: <UID>, // the ID of the initial request or emission that started the entire chain of calls - every call in a chain will have the same ID here
    bubbleId: <UID or NULL>, // the "bubble" (see more below) that the action happened in
    fromBubbleId: <UID or NULL>, // the "bubble" (see more below) that the action was triggered from
    instanceId: <UID>, // a unique ID for every action
    flowType: <STRING or missing> // either `'entry'` to show it's an entrypoint to a bubble, `'exit'` to show it's an exit from a bubble or blank to show it is neither
  }
}

Handling completion

Handlers provide you with four different ways of signalling completion: Promises, callbacks, a synchronous return or a plain value. To decide what the handler should treat as a successful result, remit follows this pattern:

if handler is not a function:
├── Return handler value
else if handler does not map second (callback) property:
├── if handler returns a promise:
│   └── Watch resolution/rejection of result
│   else:
│   └── Return synchronous result
else:
└── Wait for callback to be called

In any case, if an exception is thrown or an error is passed as the first value to the callback, then the error is passed back to the requester (if an endpoint) or the message sent to a dead-letter queue (if a listener).
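The decision tree above can be sketched in plain JavaScript (an illustration, not remit's source): a function's declared parameter count tells us whether the callback is mapped.

```javascript
// Resolve a handler to a promise, following the decision tree above.
function runHandler (handler, event) {
  return new Promise((resolve, reject) => {
    if (typeof handler !== 'function') return resolve(handler) // plain value

    if (handler.length >= 2) {
      // callback parameter is mapped: wait for it to be called
      return handler(event, (err, result) => err ? reject(err) : resolve(result))
    }

    try {
      const result = handler(event)
      if (result && typeof result.then === 'function') {
        result.then(resolve, reject) // promise: watch resolution/rejection
      } else {
        resolve(result) // synchronous return
      }
    } catch (err) {
      reject(err) // thrown exceptions become errors for the requester
    }
  })
}
```

For example, `runHandler((event, callback) => callback(null, 'done'), {})` resolves with `'done'`, while `runHandler('static', {})` resolves with `'static'`.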

Middleware

You can provide multiple handlers in a sequence to act as middleware, similar to Express's. Every handler in the line is passed the same event object, so to pass data between handlers, mutate that object.

A common use case for middleware is validation. Here, a middleware handler adds a property to incoming data before continuing:

const endpoint = await remit
  .endpoint('foo.bar')
  .handler((event) => {
    event.foo = 'bar'
  }, (event) => {
    console.log(event)
    // event will contain `foo: 'bar'`

    return true
  })
  .start()

When using middleware, it's important to know how to break the chain if you need to. If anything other than undefined is returned in any handler (middleware or otherwise via a Promise/callback/sync call), the chain will break and that data will be returned to the requester.

If an exception is thrown at any point, the chain will also break and the error will be returned to the requester.

This means you can fall out of chains early. Let's say we want to fake an empty response for user #21:

const endpoint = await remit
  .endpoint('foo.bar')
  .handler((event) => {
    if (event.data.userId === 21) {
      return []
    }
  }, (event) => {
    return calculateUserList()
  })
  .start()

Or perhaps exit if a call is done with no authorisation token:

const endpoint = await remit
  .endpoint('foo.bar')
  .handler(async (event) => {
    if (!event.data.authToken) {
      throw new Error('No authorisation token given')
    }

    event.data.decodedToken = await decodeAuthToken(event.data.authToken)
  }, (event) => {
    return performGuardedCall()
  })

Tracing

See remitrace.

License


remit's People

Contributors: davidlondono, dependabot-preview[bot], dependabot[bot], fossabot, greenkeeper[bot], jacktuck, jpwilliams


remit's Issues

Reconnect with different credentials

Hi,

I'd like to know if it's possible to reconnect or disconnect the current remit instance.

Our plan is to connect to the AMQP server with some default credentials, call a register route and then fetch the unique AMQP credentials for the microservice. After the registration the service has to reconnect with the new credentials.

Add optional callback to remit requests

When sending a request, add the option to provide a callback to .send() which will be triggered with err, data when the request has been resolved.

Do we want this? It's nice being able to use callbacks and will help converting old Remit services to the new version, but should we be forcing the use of Promises or not?

Demission set-up

For demission set-ups, use a DLX that fires back to the original exchange when the messages time out.

That way, instead of having clunky handling where messages are unacked for X amount of time, we can delay them entirely until they're fully ready by sending the messages to the DLX with a timeout and just waiting until they're fired back!

For demissions, then, we may as well use an entirely different set-up compared to requests, as there's a tonne more boilerplate they'll need.

Fantastic idea though!

Support priorities

With 3.5.0+, RabbitMQ supports priorities!

Our primary use case for this would be separating API requests from automated ones.
API requests always have priority.

This shouldn't even be too complicated! Woo!

Add the ability to proxy requests

Using queue binding, we could "proxy" requests from one queue to another.

For example, you may wish to phase out a service in lieu of another and so proxy requests to the old service endpoints to new ones during the transition phase.

We could achieve this by binding the old queue name (as a routing key) to the new queue.
We may also have to remove the binding from the old queue?

Needs some playing around.

Endpoint time-outs are never set for requests

It used to be that when a request is sent, an expiration is set on the sent message to ensure that if it times out it's dropped from the queue.

Looks like that expiration's not set any more. Boo!

heartbeat timeout cause crash

This would seem to have happened on remit 2 but not certain because it's happened on a now dead docker container.

events.js:182
      throw er; // Unhandled 'error' event
      ^

Error: Heartbeat timeout
    at Heart.<anonymous> (/app/node_modules/amqplib/lib/connection.js:425:19)
    at emitNone (events.js:105:13)
    at Heart.emit (events.js:207:7)
    at Heart.runHeartbeat (/app/node_modules/amqplib/lib/heartbeat.js:88:17)
    at ontimeout (timers.js:469:11)
    at tryOnTimeout (timers.js:304:5)
    at Timer.listOnTimeout (timers.js:264:5)

but this error was not caught and caused a crash

Add Travis CI for 2.0.0

Should be a simple Travis file. We need RabbitMQ and to be able to connect to it and we're sorted.

Do we need to create new messages/emitters for each call to `request`?

In initial plans for this iteration it was intended that, upon calling the send method (synonymous with calling the returned function from request()), a new request would be created with a brand new emitter.

The purpose of this was to ensure that only messages intended for that specific request would be handled by the handlers given, rather than being handled by other global items.

Do we need this at all?

If it is needed, if we can ideally write a test (or example that can be formatted into a test later) that confirms it's working, that'd be great.

RabbitMQ cancelling a consumer throws an error

If the consumer is cancelled by RabbitMQ, the message callback will be invoked with null.

Currently, if the consumer is cancelled, we don't handle it and end up throwing an error:

TypeError: Cannot read property 'properties' of null
    at self._consume_channel.consume (/root/apps/service-pusher/node_modules/remit/index.js:93:37)
    at Channel.BaseChannel.dispatchMessage (/root/apps/service-pusher/node_modules/amqplib/lib/channel.js:466:12)
    at Channel.BaseChannel.handleCancel (/root/apps/service-pusher/node_modules/amqplib/lib/channel.js:479:15)
    at emitOne (events.js:96:13)
    at Channel.emit (events.js:188:7)
    at Channel.C.accept (/root/apps/service-pusher/node_modules/amqplib/lib/channel.js:393:17)
    at Connection.mainAccept [as accept] (/root/apps/service-pusher/node_modules/amqplib/lib/connection.js:63:33)
    at Socket.go (/root/apps/service-pusher/node_modules/amqplib/lib/connection.js:476:48)
    at emitNone (events.js:86:13)
    at Socket.emit (events.js:185:7)
    at emitReadable_ (_stream_readable.js:432:10)
    at emitReadable (_stream_readable.js:426:7)
    at readableAddChunk (_stream_readable.js:187:13)
    at Socket.Readable.push (_stream_readable.js:134:10)
    at TCP.onread (net.js:548:20)

We should handle this case gracefully and throw a nicer error from Remit itself.

An in-range update of amqplib is breaking the build 🚨

The dependency amqplib was updated from 0.5.2 to 0.5.3.

🚨 View failing branch.

This version is covered by your current version range and after updating it in your project the build failed.

amqplib is a direct dependency of this project, and it is very likely causing it to break. If other packages depend on yours, this update is probably also breaking those in turn.

Status Details
  • continuous-integration/travis-ci/push: The Travis CI build could not complete due to an error (Details).

Commits

The new version differs by 38 commits.

There are 38 commits in total.

See the full diff

FAQ and help

There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.


Your Greenkeeper Bot 🌴

Add 'fallbacks' to requests

When making a request (or a request template) it'd be nice to be able to provide a 'fallback' value that was passed back if an error was returned from the request (or it timed out).

The error would still be emitted via the error emission (so we still have access to that) but the fallback value would be the one that's given, ensuring that requests are guaranteed to succeed in a way that users choose and still delivering the error if it's needed.

Something like this, perhaps?

// as a separate option, so it's nice and declarative?
const req = remit
  .request('foo.bar')
  .fallback('foobar')

// or as an option?
const req = remit
  .request('foo.bar')
  .options({fallback: 'foobar'})

I'd say both formats might be nice, but as @jacktuck said, it'd be nice to start minimising the API surface and focusing on some set methods rather than providing users with a hundred different ways to perform one task.

Detect if a request cannot be routed and return immediately

If a request cannot be routed to a queue, it'd be nice to know as soon as possible.
We can use the mandatory message option to have the message returned if that's the case, meaning we can throw an error back to the user that no such endpoint exists.

This does, however, come with a performance hit; it may be pertinent to expose this functionality as an optional extra.

Mark an endpoint as "transient" so that it deletes itself after an amount of inactivity

Coupled with #43, this would mean that services which are down for "prolonged" periods of time would have their queues removed, resulting in requesters getting immediate feedback that the endpoint they're trying to hit is not available.

There are a few issues with this currently:

  1. Utilising this functionality would, under the current design, mean that the endpoint would have to be transient and the requester would have to have the mandatory setting. That's a lot of assumptions/configuration.
  2. When do we expire the queue? We could do it on a default timeout of 30 seconds, but request timeouts are changeable; should endpoint timeouts be?
  3. A service being down and requests still being queued is a nice feature. Rather than this feature being an additional help, it removes one feature in place of another. Is that good?

Add reconnection logic

A client service running remit will currently crash upon losing connection to the RabbitMQ server. This is expected (#70) but highlights the lack of the ability to have a client reconnect to the server without restarting the entire process.

There are a few open-source wrappers for this (amqplib-auto-recovery, amqplib-retry, amqp-connection-manager) but they need investigating to see whether they're viable, how much logic would have to change as a result of using them, if it's pertinent to manage our own solution etc.

Use local endpoints for requests if they are available

We could have an option to enable/disable this that defaults to be on (allowing local endpoint checking).

If a service dies and has to reboot, the data it requested can no longer be routed back to it, so there's no point persisting those messages needlessly. Therefore, if a local endpoint can be found, use it.

This can only be performed for requests though; emissions should still be queued all the time.

Unify `delay` and `schedule` options in emissions

Currently users can either set delay (an integer) or schedule (a date).

Seeing as these options are mutually exclusive (with schedule being prioritised), it seems pertinent to just switch to delay accepting either an integer or a date and adjusting the behaviour appropriately.

This really isn't worth a major version bump, so keep schedule in there as an alias for delay and remove it from the docs.

distributed-promise as a feature here

It's occurred to me that the technique used at @jpwilliams/distributed-promise might also be a very relevant feature here.

If it were to be added, the result would never be cached, as that's not a good idea within RabbitMQ. We could still make use of the distributed promises though.

So how might it work? I see two possibilities right now - either a new API or add it to request's functionality.

// endpoint
remit.endpoint('user.get')
	.handler(() => ({ id: 1, name: 'Jack' }))
	.start()
// service A
const getUser = remit.request('user.get')
const user = await getUser()
// ...
// service B
const getUser = remit.request('user.get')
const user = await getUser()
// finds call by service A with same args currently in-flight,
// so doesn't send another request, but waits for that response.

Ignoring implementation details, would we have to ensure that all arguments are the same here? As in, not just the input going to the endpoint, but also things like the local request's timeout? It might also be pertinent to either have to opt in or out of the functionality via RequestOptions.

Really interesting little idea.

Add noAck usage as a separate res type

Some endpoints (the primary example being a push notification example from @jacktuck) require the ability to ack messages as they come in, meaning failed deliveries would never retry accidentally.

This can be achieved using noAck when consuming:

noAck (boolean): if true, the broker won't expect an acknowledgement of messages delivered to this consumer; i.e., it will dequeue messages as soon as they've been sent down the wire. Defaults to false (i.e., you will be expected to acknowledge messages).*

One option would be to add the ability to send consumption options into .res(), though our noAck example would require some extra details anyway, as:

It's an error to supply a message that either doesn't require acknowledgement, or has already been acknowledged. Doing so will errorise the channel. *

Our best bet would probably be to add another endpoint type like .ares() to cater for this.

* source: amqplib | Channel API reference

Multiple data listeners on endpoints are confusing

Currently, endpoints of any sort can have multiple data listeners registered. All of these listeners can return Promises or run the callback provided, so there's a huge potential for races happening where the first past the post is the one to reply.

The proposition here is to instead have the current data listeners alongside an endpoint handler. The handler would be a single entity responsible for responding to incoming data. Once this handler resolves, calls its callback, or returns synchronously (if no callback is mapped and no Promise is returned), the message will be acked and the response sent back.

This will stand entirely separate from the data listeners, which will be "fire-and-forget" listeners to incoming events. No callback. No returns.

Handling returning success in middleware

Currently, middleware only exits early if an error is thrown. What if I want to return early success?

We could adjust middleware so that:

  • If error is thrown, break chain and send error back
  • If something other than undefined is returned, break chain and send result back
  • If undefined is returned, carry on

Endpoints should not be durable

Ack Latency for Persistent Messages

basic.ack for a persistent message routed to a durable queue will be sent after persisting the message to disk. The RabbitMQ message store persists messages to disk in batches after an interval (a few hundred milliseconds) to minimise the number of fsync(2) calls, or when a queue is idle. This means that under a constant load, latency for basic.ack can reach a few hundred milliseconds. To improve throughput, applications are strongly advised to process acknowledgements asynchronously (as a stream) or publish batches of messages and wait for outstanding confirms. The exact API for this varies between client libraries.

https://www.rabbitmq.com/confirms.html

Allow handlers to be values

It'd be cool if we added the ability to return values directly in handlers for super-simple things.

remit
  .endpoint('get versions')
  .handler({
    first: 'foo',
    second: 'bar'
  })
  .start()

// or strings or what-not
remit
  .endpoint('get string')
  .handler('here you go')
  .start()

Worth it? Or a pointless feature?

Add "latches" to help with idempotency issues for complex insertions

When performing complex insertions using a single message in situations where everything must succeed otherwise the entire operation should fail, it's really difficult to achieve idempotency.

For example, a call to the endpoint team.create might also create the first user of a team. If creating the team succeeds but creating the user fails, the team should be deleted. This makes a listener pattern illogical. The route to create this via code can be a bit mad, but might be more achievable using "latches".

Using pseudo-code:

remit.res('team.create', function createTeam (args, done) {
  // create a latch that, upon being either released through code
  // with an error or because of the service dying, will send a 
  // message with the given routing key
  const latch = remit.latch('team.create.revert', args)

  // perform our tasks
  const teamErr = createTeam(args.name)
  const userErr = createUser(args.username)

  // release the latch
  // if a truthy error is given, send the message
  latch.release(teamErr || userErr)

  return done(teamErr || userErr)
})

// here we handle that latch being fired by deleting the team and user being created
remit.res('team.create.revert', function revertCreateTeam (args, done) {
  deleteTeam(args.name)
  deleteUser(args.username)

  return done()
})

One thing to discuss is whether we need an addData method on the returned latch to continue adding items. I currently see this as a bad idea, as it gives the illusion of safety where there is none.

For example:

remit.res('...', function doSomething (args, done) {
  const latch = remit.latch('undo.something')

  doIt((err, insertedId) => {
    // if the service died after running `latch.addData`, we'd be fine
    // and would be able to properly revert the change.
    // however, if we failed _before_ running `latch.addData` but _after_
    // successfully inserting it, we'd most definitely be unable to remove it.

    // it seems like a bad idea, as it lures developers into thinking they're being
    // safe when there's still a risk
    latch.addData({insertedId: insertedId})
  })
})

With that in mind, it'd be most sensible to always send the arguments the original function was given to the latch, meaning we should probably announce that a latch is present somewhere in the endpoint instantiation rather than within the callback. There is a case for only creating a latch in certain situations, such as an endpoint that creates an entity may be configured to return an existing entity with the same name if it's found and only create if it is not, meaning we'd only want a latch if there wasn't another item found.

There's also currently a bug with the current design whereby:

  1. Service receives team.create message
  2. Service creates team.create.revert latch
  3. Service dies whilst creating team
  4. team.create message and team.create.revert message are now both queued, meaning it might create-and-delete it instantaneously.

Even if we ack the original message as soon as a latch is set, we then run in to:

  1. Service receives team.create message
  2. Service creates team.create.revert latch
  3. Service dies before checking if team already exists
  4. team.create.revert message is sent to delete the team that already exists. Bummer.

Lots of bugs about it, but some interesting cases where it might be useful if we work out some kinks.

Add "pipelines": define multiple "stages" to have data flow through, automatically

Pipelines are useful and we should be able to add them to Remit without too much trouble.

  • What should happen if a pipeline stage fails?
  • Is the initial message "in progress" until the pipeline is complete?

Let's start padding this out to see if it's a possibility, starting with ascertaining how pipelines should work within the system.

Allow connections to AMQPS URLs

Connecting to an amqps://... url throws an error regarding certificates. Turns out we need to set servername in the connection options so it knows what to authenticate against.

Compose.io has a good example. Parasnipping:

const rabbitUrl = 'amqps://username:[email protected]:5672/vh'
const { hostname } = require('url').parse(rabbitUrl)

amqp.connect(rabbitUrl, {servername: hostname})

Worth doing this by default for all connections?
Might also want to allow passing any connection options and overwriting with any custom options given at the end.

Provide a default QoS

We need to give a default QoS of ~128 or something and add the option to change it in both 1.x.x and 2.x.x.

This really damages some prod systems if there are more messages than a service can handle resource-wise.
