
Comments (1)

dalssoft commented on September 25, 2024

The Problem

First, I would like to make the problem we are trying to solve clearer.

In my opinion, the problem has two parts: (1) essential and (2) tooling.

Essential

The essential part is basically the nature of distributed systems. Communication between services is not reliable, and we need mechanisms to work around this problem.

To be more specific, let's take a look at the following scenario:

  • Service A sends a request to service B
  • Service B should be there, but it is down or not responding for whatever reason

Given how reliable the current infrastructure is, developers seem to forget that this is a problem (as stated in the fallacies of distributed computing [1]) and don't bother to make their services resilient. For instance, it is common to see sync calls to remote resources with no retry mechanism.

[1] https://architecturenotes.co/fallacies-of-distributed-systems/

Tooling

Part of the reason developers don't care about reliability is that making their services resilient requires a big "investment" of time and effort.

To implement a retry mechanism today you need, for instance, to implement a producer/consumer pattern, configure a message queue, write the retry logic itself, and so on.

This is a lot of work, and it is not trivial to do it right.
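Even a purely local retry (no queue at all) already shows how much ceremony is involved. A minimal hand-rolled sketch, purely for illustration (the helper name and defaults are made up, and nothing here survives a process restart):

async function callWithRetry(fn, { attempts = 5, baseDelayMs = 200 } = {}) {
    // illustrative hand-rolled retry with exponential backoff; state lives only in memory
    for (let attempt = 1; attempt <= attempts; attempt++) {
        try {
            return await fn()
        } catch (err) {
            if (attempt === attempts) throw err
            // wait longer before each new attempt
            await new Promise(resolve => setTimeout(resolve, baseDelayMs * 2 ** attempt))
        }
    }
}

// usage (crmClient is a placeholder): await callWithRetry(() => crmClient.getCustomer(id))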

Solution

First, we need to remember that Herbs is positioned as a microservices library, so taking the distributed nature of the problem into account is a must.

Since the essential part is a given, we need to provide tooling that makes it "cheap" to implement a retry mechanism while dealing with the complexity of distributed systems.

To keep it simple, we need to provide a solution built on common patterns and best practices, integrated with the rest of the Herbs ecosystem.

Queued Use Cases

The idea is to provide a way to implement a retry mechanism for use cases.

const { usecase, queued } = require('herbsjs')
const { userQueue } = require('/src/infra/queues/userQueue.js')

queued(usecase('Create User', {

}), userQueue)

The queued function would be a wrapper that would add the retry mechanism to the use case.

When the use case is executed (uc.run()), it would send the request to the queue and return a 'queued' response. The authorization (uc.auth) would be done before sending the request to the queue.
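A hypothetical call site, just to illustrate the flow (the 'queued' response shape below is made up, not a defined Herbs API):

const uc = queued(usecase('Create User', { /* ... */ }), userQueue)
const response = await uc.run({ name: 'Ana' })   // uc.auth runs here, before the message is enqueued
// response => Ok({ status: 'queued' })          // illustrative 'queued' response shape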

The userQueue would be a queue configuration object. It would be used to configure the queue, the retry mechanism, etc.
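For example, userQueue could look roughly like this (all field names are illustrative, not a defined API):

// /src/infra/queues/userQueue.js - one possible shape for the queue configuration object
const userQueue = {
    name: 'user-queue',
    retry: { attempts: 5, backoff: { type: 'exponential', delayMs: 1000 } },
    deadLetter: 'user-queue-dlq'   // where messages would go after all retries fail
}

module.exports = { userQueue }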

When the application starts (npm start), all the consumers would be started. I don't see a problem with having consumers running in all instances of the application.

The consumers would be responsible for processing the requests in the queue and calling the "real" use case. Ex: /src/infra/queues/userQueue.js
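A rough sketch of such a consumer, assuming a hypothetical consumer() helper coming from one of the adapter libraries discussed under "Backend - Producer/Consumer" below (package name and paths are illustrative):

const { consumer } = require('herbs2rabbitmq')                        // assumed adapter package
const { userQueue } = require('/src/infra/queues/userQueue.js')
const { createUser } = require('/src/domain/usecases/createUser.js')  // the "real" use case

consumer(userQueue, async (message) => {
    const uc = createUser()
    return uc.run(message.request)   // a failure here would be retried according to userQueue.retry
})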

Let's say the developer wants to implement a retry mechanism for an existing use case. The developer would need to wrap the use case with queued and that's it.

// no queued
const uc = usecase('Create User', {})

// queued
const uc = queued(usecase('Create User', {}), userQueue)

Queued Steps

The idea is to provide a way to implement a retry mechanism for steps.

const { usecase, step, queued } = require('herbsjs')
const { userQueue } = require('/src/infra/queues/userQueue.js')

usecase('Create User', {

    'Retrieve Info from CRM': queued(step(ctx => {
        // step body that calls the remote CRM
    }), userQueue)

})

Here queued would return an instance of queuedStep, a wrapper for the step. queuedStep would have the same interface as a step but would add the retry mechanism.

Basically, the same idea as the queued use cases, but with a few differences.

Executing a use case (uc.run()) with a queued step would run as a normal use case, but when the execution reaches the queued step, it would send the request to the queue and return a 'queued' response. If the use case never reaches the queued step, it would not be sent to the queue. This mixed behavior can be a problem, since the developer would need to be aware of both behaviors.

Later, the consumer would need to run something like uc.continueFrom(context) to continue the execution of the use case. The context would need to hold the state of the use case execution prior to the queued step. So changes would be needed in the usecase functions to support this, since buchu would need to be aware of intermediate states.
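Inside a consumer like the one sketched above, a queued step would resume the use case instead of running it from scratch (continueFrom and the message shape are hypothetical):

consumer(userQueue, async (message) => {
    const uc = createUser()
    // message.context is assumed to carry the use case state captured before the queued step
    return uc.continueFrom(message.context)   // hypothetical API; resumes execution at the queued step
})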

Backend - Producer/Consumer

Once we have the context data of a use case or step, it can be sent to a queue by a producer and processed by a consumer.

queued() is part of the Herbs library, so it would not be responsible for the producer/consumer implementation.

The glue between Herbs and the message queue would be a library that would implement the producer/consumer pattern. Ex: Herbs2Redis, Herbs2RabbitMQ, etc.

`queued()` function (use case or step) [Herbs]
    |
    |
    |
    v
 producer  [Herbs2Redis, Herbs2RabbitMQ, etc]

The same for the consumer:

 consumer  [Herbs2Redis, Herbs2RabbitMQ, etc]
    |
    |
    |
    v
call use case or step [Herbs]
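As a minimal illustration of the contract such an adapter could expose (in-memory only; a real Herbs2Redis/Herbs2RabbitMQ would persist messages and handle retries):

function memoryQueueAdapter() {
    const handlers = []
    return {
        // producer side: queued() would call this with the serialized use case/step context
        async produce(message) { handlers.forEach(handler => handler(message)) },
        // consumer side: registered at application start; the handler calls the use case or step
        consume(handler) { handlers.push(handler) }
    }
}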

Security Considerations

Since the authorization (uc.auth) would be done before sending the request to the queue, the consumer could read a fake/malicious message from the queue and execute the use case or step with the wrong authorization.

Conclusion

With this proposal, we would provide a way to add a retry mechanism to use cases and steps that tries to be as simple as possible, making it "cheap" to adopt.

And I would like to reinforce that this is not about decoupling, deployment, maintenance, etc. It is about reliability.

This is a very rough idea of how we could implement a retry mechanism for use cases and steps.

I'm sure there are areas that I'm not considering, so I would like to hear your thoughts.

Other Topics

pollyjs and local retry mechanisms

Pollyjs might be an improvement for the local retry mechanism, but I don't see it as a solution for the problem. Memory queues are not as reliable as message queues, and the retry mechanism would be limited to the application instance.

An alternative would be to use queued with a memory queue. That would have the same effect (and limitations).

Server Idempotency

Out of scope for this discussion, but it is important to mention that the server (the one receiving the request) should be idempotent. This means that the request should be processed only once, even if the caller / "requester" retries it multiple times.
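For illustration, the receiving side could deduplicate by a message id (in-memory set for brevity; a real implementation would persist processed ids in a database or cache):

const processed = new Set()

async function handleOnce(message, handler) {
    // skip messages that were already processed successfully
    if (processed.has(message.id)) return { status: 'duplicate' }
    const result = await handler(message)
    processed.add(message.id)   // mark as processed only after the handler succeeds
    return result
}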
