
cae-framework

Welcome to the CleanArchEnablers framework repository! The cae-framework is open-source and meant to make the experience of developing software with clean architecture easier. Feel free to explore the source code and its documentation.

💡 The Use Cases concept

The axis of this architecture is the Use Case concept: a system is not defined by being event-driven, distributed in microservices, or anything of that nature. A system is defined by its use cases; how they are provided is another story. This is the premise.

So, if a system is supposed to execute actions such as updating a customer's score, creating a new lead, and deleting inactive users, it doesn't matter whether these use cases are provided as REST API Endpoints, CRON jobs, etc. All that matters is:

  • For each use case, will there be input and output? What are the contracts?
  • Once a use case gets executed, what is going to be its workflow?

This is the frame of perspective that defines a piece of software, according to the clean architecture literature.

Once it is defined and implemented at the source code level, the next step is to engage in defining what is going to execute the use cases (components called primary adapters) and what is going to provide for the use cases during their executions (components called secondary adapters). It is only at that moment, after having built the use cases, that it matters whether or not they will end up being available as Kafka Topic Consumers, Spring MVC Endpoints, AWS Lambda Functions or whatever. It is only then that it matters if the database will be SQL or NoSQL, if the data will be retrieved from a REST API or directly from a database.

When the use cases are built in a well defined manner, it is possible to reuse them in any flavor.

This concept is implemented by the cae-framework. Whenever a new use case is created, it will have one of the following types:

  • FunctionUseCase
  • ConsumerUseCase
  • SupplierUseCase
  • RunnableUseCase

It will depend on the kind of contract the use case has:

  • Does it have input AND output? Then, it is a FunctionUseCase.
  • Does it have ONLY input? In this case, it is a ConsumerUseCase.
  • Does it have ONLY output? That is a SupplierUseCase.
  • Does it have NEITHER input NOR output? This one is a RunnableUseCase.
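As a rough sketch, the four contracts could look like the following in Java. These are illustrative stand-ins, not the actual cae-framework classes, which carry more responsibilities:

```java
// Illustrative stand-ins for the four use case contract types
// (simplified; the real cae-framework types differ in detail).
abstract class FunctionUseCase<I, O> {
    public abstract O execute(I input);    // input AND output
}

abstract class ConsumerUseCase<I> {
    public abstract void execute(I input); // ONLY input
}

abstract class SupplierUseCase<O> {
    public abstract O execute();           // ONLY output
}

abstract class RunnableUseCase {
    public abstract void execute();        // NEITHER input NOR output
}

// A trivial example: a use case with no input and a String output.
class GreetUseCase extends SupplierUseCase<String> {
    @Override
    public String execute() {
        return "hello";
    }
}
```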

The illustration below might help the visualization:

image

Some examples of possible Use Cases by type are:

  • SaveNewUser: it will receive input (the payload with the new user's data to be persisted) and return some output (usually the ID of what has been created, in this case, the new user). That's a FunctionUseCase.
  • UpdateProduct: it will receive input (the payload with the product's data to be updated). Once the update is done, usually it is not necessary to return anything. That's a ConsumerUseCase.
  • RetrieveLatestCompanyCreated: it will return the newest company from the database. It doesn't need any input to get going. So, that's a SupplierUseCase.
  • DeleteOldMessages: it will delete old messages without receiving any input or returning any output. That's a RunnableUseCase.

Every example of Use Case mentioned above can be developed to be made available as REST API Endpoints, Queue Consumers, Topic Consumers, CRON jobs. You name it. If each Use Case is its own thing, it becomes a piece of software that can be reused in a plug-in/plug-out fashion. In this manner a Use Case is not a REST API Endpoint, but instead is dispatched by one.

Take a look at some real examples.

exem1

That is a Java project. Each of its use cases is located within the {groupId}.{artifactId}.core.use_cases package.

exem2

Each use case has its own package, following a variation of the Vertical Slice pattern. Inside each use case package, the same structure is used:

image

It will always be the same:

  • The Use Case contract (the class at the root level of the use case package)
  • The Use Case I/O definitions (classes within the io package)
  • The Use Case implementation (classes within the implementation package)
  • The Use Case factory (class within the factories package)

It all starts at the Use Case Contract level:

image

Here the Use Case is declared as a FunctionUseCase, which means it'll have both Input and Output contracts. The IO is defined by the {UseCaseName}Input and the {UseCaseName}Output classes.

image

On the right side of the image it is possible to observe how such contracts were defined. It determines that the RetrieveCustomersUseCase will have as input:

  • ownerId (required - thus the @NotNullInputField annotation)
  • query (optional, but when present, can't be a blank string - thus the @NotBlankInputField annotation)
  • asc (required)
  • active (required)

(note that for the annotations to work as intended the input class must extend the UseCaseInput type, from the cae-framework)

And as output:

  • customers
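Put together, the contracts described above could be sketched like this. The annotations below are local stand-ins for the cae-framework ones, the field types are assumptions, and the real input class would extend the framework's UseCaseInput:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.util.Arrays;
import java.util.List;

// Hypothetical stand-ins for the cae-framework validation annotations,
// declared here only so the contract sketch compiles on its own.
@Retention(RetentionPolicy.RUNTIME) @interface NotNullInputField {}
@Retention(RetentionPolicy.RUNTIME) @interface NotBlankInputField {}

// Sketch of the input contract: field names follow the article,
// field types are assumptions for illustration.
class RetrieveCustomersUseCaseInput {
    @NotNullInputField  Long ownerId;   // required
    @NotBlankInputField String query;   // optional, but not blank when present
    @NotNullInputField  Boolean asc;    // required
    @NotNullInputField  Boolean active; // required

    // Demo helper: counts the fields marked as required via reflection.
    static long requiredFieldCount() {
        return Arrays.stream(RetrieveCustomersUseCaseInput.class.getDeclaredFields())
                .filter(field -> field.isAnnotationPresent(NotNullInputField.class))
                .count();
    }
}

// Sketch of the output contract; the element type is simplified here.
class RetrieveCustomersUseCaseOutput {
    List<String> customers;
}
```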

Now, if the contract of the RetrieveCustomersUseCase changes, it doesn't have to be a breaking change, since the IO types are always the same:

  • RetrieveCustomersUseCaseInput
  • RetrieveCustomersUseCaseOutput

Their internal content might change, but the contract is final. So, if new fields are added, the clients which consume the Use Case won't break. If fields need to be removed, it is possible to go with a deprecation policy instead of removing them outright, and no breaking change will be inflicted.

Once such a contract is defined, it is possible to build the client code which will consume the Use Case. It would look like this:

image

Here a REST API Endpoint makes the RetrieveCustomersUseCase available to be used. In order to get the use case executed, the Controller endpoint instantiated an object for the input (useCaseInput, line 27) and passed it in the execution method call (FunctionUseCase::execute, line 33). The execution call receives a second parameter as well, not mentioned up to this moment: the correlation ID, a string value in UUID format. Use Case executions will always receive an instance of UseCaseExecutionCorrelation, which encapsulates the UUID string value. This is a design decision to generate logs with a unique identifier at each Use Case execution, right out of the box. So every execution of any Use Case will have a log generated with a UUID identifier in it, telling whether the Use Case execution was successful or not.

image

The UUID value is received as a parameter so that, if the whole lifecycle of a request starting from the frontend web app is supposed to be easily traced, the same UUID can be propagated from there all the way down to the backend service. That's why the UUID value is open to be given.
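A minimal stand-in for the correlation wrapper might look like this. The `ofNew` factory is a hypothetical convenience, not necessarily part of the real framework API:

```java
import java.util.UUID;

// Minimal stand-in for UseCaseExecutionCorrelation: it just wraps a UUID
// string so every execution can be traced in the generated logs.
class UseCaseExecutionCorrelation {
    private final String id;

    // Accepts an externally generated UUID (e.g. propagated from a frontend).
    UseCaseExecutionCorrelation(String uuidString) {
        this.id = UUID.fromString(uuidString).toString(); // validates the format
    }

    // Hypothetical factory for when no external UUID is being propagated.
    static UseCaseExecutionCorrelation ofNew() {
        return new UseCaseExecutionCorrelation(UUID.randomUUID().toString());
    }

    String getId() { return id; }
}
```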

That's it about the Use Case contract. Now, what about its implementation? Take a look at the next image:

image

Extremely simple, for this Use Case. The implementation class will inherit the method applyInternalLogic, which is supposed to wrap the internal workflow logic of the Use Case. Inside this scope the code is supposed to form a visual workflow of high-abstraction steps, meant to be easily understandable at a glance. In this specific instance, the workflow is very simple, because it is composed of only one step: make the query. Once it is done, the result is returned.

Now, take a look at another Use Case implementation example:

image

The steps are very clear:

  • It increases the number of transactions the new customer has
  • It activates the new customer
  • It validates the new customer
  • It stores the new customer at the persistence layer

How these steps are implemented is something one can find out by entering each respective lower level.
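The steps above can be sketched as a high-abstraction workflow. Customer and the step bodies are illustrative stand-ins, not the project's real entity:

```java
// Stand-in entity: holds the state the workflow steps manipulate.
class Customer {
    int transactions;
    boolean active;
    boolean valid;

    void addNewTransaction() { this.transactions++; }
    void activate()          { this.active = true; }
    void validate()          { this.valid = this.transactions > 0 && this.active; }
}

// Sketch of an applyInternalLogic-style method: each line is one
// high-abstraction step, readable at a glance.
class SaveNewCustomerWorkflow {
    static Customer run(Customer newCustomer) {
        newCustomer.addNewTransaction();
        newCustomer.activate();
        newCustomer.validate();
        store(newCustomer); // persistence step, stubbed here
        return newCustomer;
    }

    static void store(Customer customer) {
        // in the real project this would go through a Port to the persistence layer
    }
}
```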

Take a look at another example of Use Case implementation:

image

  • It stores the new company
  • It handles a new generic account for the new company
  • It handles the first enrollment for that new company
  • It handles the plan contract for the new company

If any further details are needed, entering the lower level is enough. With the whole picture easily understandable, it is more likely one goes down into precisely the chunk of code where the further details are located.

Now, how do the implementations interact with their respective dependencies? For instance, how does the RetrieveCustomersUseCaseImplementation retrieve the data from the persistence layer?

💡 The Adapters Concept

image

When a Use Case implementation needs to interact with something external to its scope, it does so by interacting with abstractions. So if what's actually being interacted with changes, there'll be no coupling. The workflow is free from the peripheral parts of the system. Such abstractions are called Ports. They define what the Use Case is able and willing to interact with, and the other side will have to do what's necessary to meet such requirements. The other side is the real dependency, like an HTTP client to call an external API, or some Repository to manipulate data from a database. Still, an HTTP client library will not know the specific requirements of the Use Cases of a random project using it. That's where the Adapter concept joins the conversation.

Adapters are meant to... adapt... what the Use Cases know how to deal with to what the real dependencies know how to deal with.

So the Ports are like slots, spaces to use plugins. The Adapters are the plugins, which will adapt the contract from the Ports to the Real Dependencies, and vice versa. The Real Dependencies are the real deal outside of the domain world: components such as Kafka Producers, Cloud Provider Clients, DAO repositories, etc.

image

Let's take a look at how it is done with the cae-framework:

image

Considering the workflow, these are the steps:

  • Map the input to an Entity object
  • Increase the new customer's number of transactions
  • Activate the new customer
  • Validate the new customer
  • Store it at the persistence layer

Which of these steps seems like something out of the domain layer?

Increasing the number of transactions is something that looks like a business rule. If one guessed that, the guess would be correct. Once that is the case, such logic could be located within an Entity, and that's exactly what it is. The "addNewTransaction" (line 30) is a method from an Entity called Customer. That's why it was necessary to map the input object (SaveNewCustomerUseCaseInput) to the Entity format in the first place: to be able to use the Entity's API. Once the Customer object is created, it is possible to use its methods with its business rules.

  • Customer::addNewTransaction
  • Customer::activate
  • Customer::validate

How these methods are implemented, what their business rules are, that's within the Entity scope.

Once the business rules are applied, it is time to apply the application rule of storing what's being generated at a persistence layer.

That's the part of the workflow where external components are needed. To represent them, an abstraction is created: the StoreNewCustomerPort.

Since it is a dependency which will be injected, it is declared as an instance attribute (line 14), initialized in the constructor when the Use Case implementation object is instantiated (line 22).

That's what the Port component looks like:

image

It is just an abstract class that extends a kind of Port from the cae-framework.

Just like the Use Case types, the Port types come in four:

  • FunctionPort (I/O)
  • ConsumerPort (I)
  • SupplierPort (O)
  • RunnablePort

Once a class inherits one of these types, it will come with an execution method. It will always receive the UseCaseExecutionCorrelation parameter and, depending on its base type, a generically typed input. Just like the Use Case types' contracts.

The implementation of this Port is the Adapter: a concrete class that extends the Port abstract class.
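In sketch form, the Port/Adapter pair could look like this. The names mirror the article, but the Port base type is a simplified stand-in and the repository is replaced by an in-memory list for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for a cae-framework Port base type (I = input only).
abstract class ConsumerPort<I> {
    public abstract void executeLogic(I input);
}

// Stand-in payload the use case hands to the Port.
class CustomerRecord {
    final String name;
    CustomerRecord(String name) { this.name = name; }
}

// The Port: the abstraction the use case implementation depends on.
abstract class StoreNewCustomerPort extends ConsumerPort<CustomerRecord> {}

// The Adapter: a concrete class extending the Port, integrating with the
// real dependency (CustomersTableRepository in the article; a list here).
class StoreNewCustomerAdapter extends StoreNewCustomerPort {
    static final List<String> customersTable = new ArrayList<>();

    @Override
    public void executeLogic(CustomerRecord input) {
        customersTable.add(input.name);
    }

    // Demo helper: exercises the adapter in isolation.
    static boolean storeAndCheck(String name) {
        new StoreNewCustomerAdapter().executeLogic(new CustomerRecord(name));
        return customersTable.contains(name);
    }
}
```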

It is usually located in another, entirely separate Java project. The one we've been using as the example is like a library; then another library is created to keep the Adapters. The earlier example of using a REST API Endpoint to dispatch the Use Case functionality was a different project, which used the libraries with the Use Case implementation and its respective Adapters.

Let's take a look at how this is structured:

image

The axis of both projects is the domain and its use cases:

  • Customers (customer & customer-adapters)
  • Use cases about Customers (deactivate customer, increment number of transactions, retrieve customer by id, ...)

image

Inside the pom.xml of the Adapters project it is possible to find the Core project as a dependency:

image

This way the Adapters project can reference the Ports it has to adapt for the real dependencies.

For our specific Use Case, that's how it took place:

image

And at the code level:

image

The Adapter class extends its respective Port. Because of that it inherits the abstract method "executeLogic", whose implementation is supposed to keep the logic for the adaptation. There it does what it has to do to integrate with its real dependencies:

  • CustomersTableRepository
  • CustomersPhoneTableRepository

Once every Adapter is implemented, the Use Case can be instantiated receiving them via constructor.

The way we do it is by creating another Java project, called {domain-name}-assemblers.

image

The Assemblers layer is responsible for... assembling... the Use Case instances. There the developer will select which Adapter instances will be injected.

It has the Adapters layer as a dependency, which in turn has the core layer as a dependency.

Below follows an example of Assembler for a Use Case:

image

This way it is possible to adopt an immutability policy, meaning new versions of the assembled Use Cases won't necessarily override previous ones: each assembled version can be preserved, only increasing the available ones.

For example, instead of ASSEMBLED_INSTANCE (line 14) it could be V1 and whenever a new version gets created, a new static final field follows: V2, V3, VN...
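The versioning idea can be sketched like this. GreetUseCase stands in for a real use case, and its constructor argument stands in for the Adapter instances a real assembler would inject:

```java
// Stand-in use case; a real one would receive Adapter instances here.
class GreetUseCase {
    private final String greeting;
    GreetUseCase(String greeting) { this.greeting = greeting; }
    String execute() { return greeting; }
}

// Assembler sketch: each assembled version is kept as its own immutable
// static field, so new versions never override previous ones.
class GreetUseCaseAssembler {
    static final GreetUseCase V1 = new GreetUseCase("hello");
    static final GreetUseCase V2 = new GreetUseCase("hello there");
    // V3, V4... can be added later, only increasing the available versions.
}
```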

That's it. Once an Assembler is built, any piece of external code can use the Use Case API from the library. This way it can be reused in any flavor of external architecture/framework:

  • Spring
  • Micronaut
  • REST API
  • Kafka Consumer
  • Queue Consumer
  • CRON Jobs
  • Functions as a Service
  • On AWS
  • On Azure
  • On Google Cloud
  • ...

The only constraint is that the external piece of code be written in the same programming language, or in one that has interoperability with Java, such as Kotlin.

💡 The Proxy Area and Mapped Exceptions Concepts

Every inheritor of any Use Case or Port component will have at least two main methods to be interacted with:

  • A public method for getting the execution triggered
  • A protected method for executing the internal logic of such inheritor

In the Use Case used as an example previously, there they are, respectively:

image

The Use Case is triggered to be executed at line 33, with the execute method. That's the public one. But when we take a look at the Use Case implementation, it doesn't seem to be the same method:

image

The method within the implementation is applyInternalLogic. It is not the one being called at the REST API Endpoint layer because it is only meant to be called internally, at the Use Case executor, which is fully managed by the cae-framework.

It is in this Proxy Area between the execute and the applyInternalLogic that the automatic logs are generated and the input objects are validated. Soon enough caching features will be added at this level too.

Another feature, which is currently enabled at the Proxy Area, is the exception handling. It uses a component from the cae-framework called Trier.

The Trier component does the work of a try-catch with some specifics:

image

If the exception being thrown during a Trier execution extends the type MappedException, the Trier will consider it as part of the expected flow that you've designed, so it will let it go. On the other hand, if the exception does not extend the MappedException type, the Trier will consider it a breach and catch it, passing it to the parameterized handler specified at the Trier instantiation phase.

The handler is only for cases of unexpected exceptions being thrown during the execution. The handler must follow the functional interface of UnexpectedExceptionHandler, which is the contract of accepting an Exception instance and returning a MappedException instance.
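The Trier behavior can be sketched as follows, under a simplified, assumed API (the real Trier instantiation phase differs):

```java
import java.util.concurrent.Callable;
import java.util.function.Function;

// Stand-in for the framework's MappedException base type.
class MappedException extends RuntimeException {
    MappedException(String message) { super(message); }
}

// Sketch of the Trier idea: exceptions extending MappedException are part
// of the designed flow and pass through; anything else is caught and handed
// to the parameterized handler, which wraps it as a MappedException.
class Trier {
    static <T> T execute(Callable<T> action,
                         Function<Exception, MappedException> unexpectedExceptionHandler) {
        try {
            return action.call();
        } catch (MappedException mapped) {
            throw mapped; // expected by design: let it go
        } catch (Exception unexpected) {
            throw unexpectedExceptionHandler.apply(unexpected); // breach: map it
        }
    }

    // Demo helper: runs an action and reports the resulting message.
    static String classify(Callable<Object> action) {
        try {
            execute(action, e -> new MappedException("unexpected: " + e.getMessage()));
            return "ok";
        } catch (MappedException e) {
            return e.getMessage();
        }
    }
}
```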

You can use it wherever you want in your code. As mentioned, the Use Case and Port types use it, and in case something goes unexpectedly wrong during their execution, they will throw respectively:

🛑 UseCaseExecutionException 🛑 PortExecutionException

Both of them are types that extend MappedException.

MappedException subtypes

image

If you are developing a REST API with Spring Boot, for example, you could use your @ControllerAdvice to map, with @ExceptionHandler methods, the NotFoundMappedException to a 404 status code, the InputMappedException to a 400, and the InternalMappedException to a 500. This way any exception extending NotFoundMappedException would automatically return a 404, the ones extending InputMappedException would return a 400, and the InternalMappedException ones a 500. No need to specify each type (UserNotFoundException, CreditCardNotFoundException, etc.) unless there is a good reason for it.
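The mapping itself boils down to something like this plain-Java sketch. In a real Spring Boot app each branch would be a separate @ExceptionHandler method; here UserNotFoundException is a hypothetical subtype:

```java
// Exception hierarchy mirroring the MappedException subtypes in the article.
class MappedException extends RuntimeException {}
class NotFoundMappedException extends MappedException {}
class InputMappedException extends MappedException {}
class InternalMappedException extends MappedException {}

// Hypothetical project-level exception: maps to 404 automatically
// because it extends NotFoundMappedException.
class UserNotFoundException extends NotFoundMappedException {}

class StatusMapper {
    static int toHttpStatus(MappedException e) {
        if (e instanceof NotFoundMappedException) return 404;
        if (e instanceof InputMappedException)    return 400;
        return 500; // InternalMappedException and any other MappedException
    }
}
```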

🔜 Future features

โณ Optionality for logging right out of the box

  • Currently every declared Use Case has to receive an instance of the Logger interface via constructor. It means you'll have to create a class that implements the internal Logger interface and pass its instance via each Use Case constructor you create. That's because of the feature of generating logs right out of the box just by executing Use Case instances. The logging logic is internal to the framework, but to avoid coupling client projects to a specific logging tool, we created an abstraction layer and let you choose which logging tool will be used, the tradeoff being that you have to pass it via constructor every time.

    Though that's the current scenario, it is on the roadmap to make that feature optional, so if you don't want to pass an instance, the automatic logging won't be triggered.

    That's an example of a class implementing the Logger interface:

image
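In sketch form, such a class could look like this. The method names below are assumptions; the real cae-framework Logger interface may declare different ones:

```java
// Hypothetical Logger-style interface, standing in for the framework's own.
interface Logger {
    void logInfo(String message);
    void logError(String message);
}

// A simple implementation delegating to standard output/error streams.
class SystemOutLogger implements Logger {
    // Helper that builds the log line; extracted so it can be checked in isolation.
    static String format(String level, String message) {
        return "[" + level + "] " + message;
    }

    @Override
    public void logInfo(String message)  { System.out.println(format("INFO", message)); }

    @Override
    public void logError(String message) { System.err.println(format("ERROR", message)); }
}
```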

โณ Aggregated logs

  • The automatic log generation will be able to include Input and Output data in the message by parameterization. Fields that hold sensitive data can be marked as such, so the processor will mask them.

โณ Caching Use Case & Port executions

  • Each Use Case and their respective Ports have the Proxy Area for getting executed. It will be possible to parameterize them to cache responses. The modes will be local and remote. When remote mode is selected, an implementation of the caching interface must be provided.

โณ Documentation right out of the box

  • With the Use Case metadata mapped, it will be possible to extract automatic documentation from the source code during the build lifecycle, typically during the CI/CD pipeline execution, for instance. It will take the Use Case IO and the commands within applyInternalLogic to define end-to-end documentation of what is going on, exporting it as a file within the target folder.

โณ More Input validations

  • Currently the only validations available are @ValidInnerPropertiesInputField, @NotNullInputField, @NotEmptyInputField and @NotBlankInputField, the last two being exclusive to String-typed fields. The goal is to increase the options: validating lists, numbers, etc.


Want a full layer example? Check that out: some-core-layer

That's an overall didactic documentation of the cae-framework. Feel free to engage! Welcome to the CleanArchEnablers environment.
