
SpiffWorkflow is a software development platform for building, running, and monitoring executable diagrams

Home Page: https://www.spiffworkflow.org/

License: GNU Lesser General Public License v2.1

Topics: bpmn, spiffworkflow, bpm, business-process, flowchart, orchestration-framework, process-engine, python, workflow, workflow-engine

spiff-arena's Introduction

spiff-arena

SpiffArena is a low(ish)-code software development platform for building, running, and monitoring executable diagrams. It is intended to support Citizen Developers and to enhance their ability to contribute to the software development process. Using tools that look a lot like flow-charts and spreadsheets, it is possible to capture complex rules in a way that everyone in your organization can see, understand, and directly execute.

Please visit the SpiffWorkflow website for a Getting Started Guide to see how to use SpiffArena and try it out. There are also additional articles, videos, and tutorials about SpiffArena and its components, including SpiffWorkflow, Service Connectors, and BPMN.js extensions.

Backend Setup, local

Remember, if you don't need a full-on native dev experience, you can run with docker (see below), which saves you from all the native setup. If you have issues with the local dev setup, please consult the troubleshooting guide.

There are three prerequisites for non-docker local development:

  1. python - asdf-vm works well for installing this.
  2. poetry - pip install poetry works
  3. mysql - the app also supports postgres (and sqlite, if you are talking local dev).
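For example, one way to install these on a Debian/Ubuntu-like system using asdf (the Python version below is only illustrative; use the one in .tool-versions if present):

    asdf plugin add python
    asdf install python 3.11.6
    asdf global python 3.11.6
    pip install poetry
    sudo apt-get install -y mysql-server    # or use postgres/sqlite instead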

When these are installed, you are ready for:

    cd spiffworkflow-backend
    poetry install
    ./bin/recreate_db clean
    ./bin/run_server_locally

Mac Port Errors: On a Mac, port 7000 (used by the backend) might be hijacked by AirPlay. If you are on macOS 12.1 or later and running everything locally, the AirPlay Receiver may have started listening on port 7000, so the backend server (which uses port 7000 by default) can fail because the port is already in use. You can disable this in System Preferences > Sharing > AirPlay Receiver.

Poetry Install Errors: If you encounter errors during poetry install, note that the MySQL and PostgreSQL client libraries require certain packages to exist on your system before they can be installed. Please see the PyPI mysqlclient instructions and the prerequisites for the Postgres psycopg2 adapter. Following those instructions carefully will ensure your OS has the right dependencies installed. Correct these, then rerun the commands above.
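For example, on a Debian/Ubuntu-like system the usual build prerequisites are (package names differ on other distros):

    # headers and tools needed to build mysqlclient and psycopg2 from source
    sudo apt-get install -y build-essential pkg-config default-libmysqlclient-dev libpq-dev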

Using PyCharm? If you would like to run or debug the project within an editor like PyCharm, please see these directions for PyCharm setup.

Keycloak Setup

You will want an OpenID server of some sort for authentication. For simplicity, one is built in to the app and used in the docker compose setup, but it is not meant for production; the non-compose defaults use a separate Keycloak container instead. You can start it like this:

./keycloak/bin/start_keycloak

It will be running on port 7002. The Keycloak admin console can be found at http://localhost:7002, and the credentials are admin/admin (these also log you in to the app if you are running the frontend).

Frontend Setup, local

First install nodejs (also installable via asdf-vm), ideally the version in .tool-versions (but likely other versions will work). Then:

cd spiffworkflow-frontend
npm install
npm start

Assuming you're running Keycloak as indicated above, you can log in with admin/admin.

Run tests

./bin/run_pyl

Run cypress automated browser tests

Get the app running so you can access the frontend at http://localhost:7001 in your browser by following the frontend and backend setup steps above, and then:

./bin/run_cypress_tests_locally

Docker

For full instructions, see Running SpiffWorkflow Locally with Docker.

The docker-compose.yml file is for running a full-fledged instance of spiff-arena while editor.docker-compose.yml provides BPMN graphical editor capability to libraries and projects that depend on SpiffWorkflow but have no built-in BPMN edit capabilities.

Using Docker for Local Development

If you have docker and docker compose, as an alternative to locally installing the required dependencies, you can leverage the development docker containers and Makefile while working locally. To use, clone the repo and run make. This will build the required images, install all dependencies, start the servers and run the linting and tests. Once complete you can open the app and code changes will be reflected while running.

After the containers are set up, you can run make start-dev and make stop-dev to start and stop the servers. If the frontend or backend lock file changes, make dev-env will recreate the containers with the new dependencies.

Please refer to the Makefile as the source of truth, but for a summary of the available make targets:

Target        Action
dev-env       Builds the images, sets up the backend db, and installs npm and poetry dependencies
start-dev     Starts the frontend and backend servers (stopping them first if they were already running)
stop-dev      Stops the frontend and backend servers
be-tests-par  Runs the backend unit tests in parallel
fe-lint-fix   Runs npm lint:fix in the frontend container
run-pyl       Runs all frontend and backend lints and backend unit tests

Contributing

To start understanding the system, you might:

  1. Explore the demo site via the Getting Started Guide
  2. Clone this repo, cd docs, run ./bin/build, and open your browser to http://127.0.0.1:8000 to view (and ideally edit!) the docs
  3. Check out our GitHub issues, find something you like, and ask for help on discord

Monorepo

This is a monorepo based on git subtrees that pulls together various spiffworkflow-related projects. FYI, some scripts:

ls bin | grep subtree
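For example, pulling in upstream changes for one of the subtrees looks roughly like this (the prefix and branch here are illustrative; check the scripts above for the exact invocations used in this repo):

    git subtree pull --prefix=SpiffWorkflow https://github.com/sartography/SpiffWorkflow.git main --squash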

License

SpiffArena's main components are published under the terms of the GNU Lesser General Public License (LGPL) Version 3.

Support

You can find us on our Discord Channel. Commercial support for SpiffWorkflow is available from Sartography. Please contact us via the schedule a demo link on the SpiffWorkflow website to discuss your needs.

spiff-arena's People

Contributors

burnettk, calexh-sar, chrda81, cullerton, danfunk, dependabot[bot], essweine, fzzylogic, jakubgs, jasquat, jbirddog, kayvon-martinez, kokhoor, madhurrya, phillana26, pixeebot[bot], tcoz, theaubmov, usama9500, violet4, widnyana


spiff-arena's Issues

Script task: Can't import module

I've tried to create a script task that uses the requests module to get online information.

When I try to run the process, I get the following error:

ImportError: Import not allowed: requests

Add support for a KKV data store

Due to the nature of reading/writing from data stores, it would be advantageous to have a KKV data store which defers the actual read/write until a top-level key is specified. This could be achieved with low effort if functions were returned from the data store read/write operations (see the sketch below). Another option is to add extensions to the arrows between data stores and tasks which would specify the top-level key.
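A minimal sketch of the "return a function" option (all names are hypothetical), where the actual read/write is deferred until the top-level key is supplied:

    from typing import Any, Callable

    class KKVDataStore:
        """Hypothetical key-key-value store: _data[top_key][secondary_key] = value."""

        def __init__(self) -> None:
            self._data: dict[str, dict[str, Any]] = {}

        def read(self) -> Callable[[str, str], Any]:
            # Defer the actual lookup until the caller supplies both keys.
            def getter(top_key: str, secondary_key: str) -> Any:
                return self._data.get(top_key, {}).get(secondary_key)
            return getter

        def write(self) -> Callable[[str, str, Any], None]:
            # Likewise, defer the write until both keys and the value are known.
            def setter(top_key: str, secondary_key: str, value: Any) -> None:
                self._data.setdefault(top_key, {})[secondary_key] = value
            return setter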

Design CRUD support for data stores

Need to decide how we are going to support CRUD operations across multiple data store types, ideally behind some common interface. One approach could be an interface beyond that exposed by SpiffWorkflow that allows each data store to describe how to add/update/delete records within its realm. Once this is designed out, separate tickets will likely be required for the implementation of the various data stores.

Process Instance metadata backfill

Right now if you change the metadata extractions, existing instances are unaffected. It might be nice if there was a big red button to backfill metadata for a process model. Maybe it would start a job in the background. Definitely don't fail if the task data doesn't include the metadata.

Not for this ticket, but just as a reminder: there is also a related issue where metadata extraction paths are not remembered when you create a process instance (like other things are); instead, the newest ones are always used. Thoughts on a potential db schema cache to fix this are at https://github.com/sartography/spiff-arena/pull/1648/files#diff-388074671d2d3fff982213227431b6f3970b1385a2458300382fd3da9f4e69a1R915

Simple blank task Go button does not work

When executing a diagram with a simple blank task, the user is required to click on a Go button (like a manual task).

But clicking on this Go button redirects to the "Something went wrong" page.
And the diagram execution is stuck on this simple task node...

This Go button should work as expected and complete the task, or the task should be executed without manual user action (but still executing pre/post scripts).

Threads not working on MI tasks?

This diagram contains a Multi-Instance task, then a Multi-instance Sub-process:

Capture du 2023-09-14 17-29-33

Spiff-Arena seems to not use parallelism in the first MI task, with the latest backend docker image (Created at 2023-09-12T02:00:33.701215495Z)

List of events:

Id Bpmn process Task name Task identifier Task type Event type User Timestamp
544 Fmpeg chunked   End SimpleBpmnTask task_completed [email protected] 2023-09-15 14:38:14
543 Fmpeg chunked   ffmpeg_chunked.EndJoin _EndJoin task_completed [email protected] 2023-09-15 14:38:14
542 Fmpeg chunked End end_process EndEvent task_completed [email protected] 2023-09-15 14:38:14
541 Fmpeg chunked   split_subprocess ParallelMultiInstanceTask task_completed [email protected] 2023-09-15 14:38:14
540 Fmpeg chunked Split Subprocess split_subprocess [child] SubWorkflowTask task_completed [email protected] 2023-09-15 14:38:14
539 Fmpeg chunked Split Subprocess split_subprocess [child] SubWorkflowTask task_completed [email protected] 2023-09-15 14:38:14
538 Fmpeg chunked Split Subprocess split_subprocess [child] SubWorkflowTask task_completed [email protected] 2023-09-15 14:38:14
537 Split Subprocess   End SimpleBpmnTask task_completed [email protected] 2023-09-15 14:38:14
536 Split Subprocess   End SimpleBpmnTask task_completed [email protected] 2023-09-15 14:38:14
535 Split Subprocess   End SimpleBpmnTask task_completed [email protected] 2023-09-15 14:38:14
534 Split Subprocess   split_subprocess.EndJoin _EndJoin task_completed [email protected] 2023-09-15 14:38:14
533 Split Subprocess   split_subprocess.EndJoin _EndJoin task_completed [email protected] 2023-09-15 14:38:14
532 Split Subprocess   split_subprocess.EndJoin _EndJoin task_completed [email protected] 2023-09-15 14:38:14
531 Split Subprocess End subprocess end_subprocess EndEvent task_completed [email protected] 2023-09-15 14:38:14
530 Split Subprocess End subprocess end_subprocess EndEvent task_completed [email protected] 2023-09-15 14:38:14
529 Split Subprocess End subprocess end_subprocess EndEvent task_completed [email protected] 2023-09-15 14:38:14
528 Split Subprocess FFmpeg video + audio muxer ffmpeg_muxer ServiceTask task_completed [email protected] 2023-09-15 14:38:14
527 Split Subprocess FFmpeg video + audio muxer ffmpeg_muxer ServiceTask task_completed [email protected] 2023-09-15 14:38:11
526 Split Subprocess FFmpeg video + audio muxer ffmpeg_muxer ServiceTask task_completed [email protected] 2023-09-15 14:38:07
525 Split Subprocess   Gateway_1oloqe8 ParallelGateway task_completed [email protected] 2023-09-15 14:38:04
524 Split Subprocess   Gateway_1oloqe8 ParallelGateway task_completed [email protected] 2023-09-15 14:38:04
523 Split Subprocess   Gateway_1oloqe8 ParallelGateway task_completed [email protected] 2023-09-15 14:38:04
522 Split Subprocess Concatenate Chunks concatenate_chunks ServiceTask task_completed [email protected] 2023-09-15 14:38:04
521 Split Subprocess Concatenate Chunks concatenate_chunks ServiceTask task_completed [email protected] 2023-09-15 14:38:01
520 Split Subprocess Concatenate Chunks concatenate_chunks ServiceTask task_completed [email protected] 2023-09-15 14:37:58
519 Split Subprocess Get Chunk List get_chunk_list ServiceTask task_completed [email protected] 2023-09-15 14:37:54
518 Split Subprocess Get Chunk List get_chunk_list ServiceTask task_completed [email protected] 2023-09-15 14:37:54
517 Split Subprocess Get Chunk List get_chunk_list ServiceTask task_completed [email protected] 2023-09-15 14:37:54
516 Split Subprocess Fmpeg audio encode ffmpeg_audio_encode ServiceTask task_completed [email protected] 2023-09-15 14:37:54
515 Split Subprocess FFmpeg Split Chunks ffmpeg_split_chunks ServiceTask task_completed [email protected] 2023-09-15 14:37:51
514 Split Subprocess FFmpeg Split Chunks ffmpeg_split_chunks ServiceTask task_completed [email protected] 2023-09-15 14:37:48
513 Split Subprocess Fmpeg audio encode ffmpeg_audio_encode ServiceTask task_completed [email protected] 2023-09-15 14:37:44
512 Split Subprocess Fmpeg audio encode ffmpeg_audio_encode ServiceTask task_completed [email protected] 2023-09-15 14:37:41
511 Split Subprocess FFmpeg Split Chunks ffmpeg_split_chunks ServiceTask task_completed [email protected] 2023-09-15 14:37:37
510 Split Subprocess   Start BpmnStartTask task_completed [email protected] 2023-09-15 14:37:34
509 Split Subprocess Start subprocess start_subprocess StartEvent task_completed [email protected] 2023-09-15 14:37:34
508 Split Subprocess Start subprocess start_subprocess StartEvent task_completed [email protected] 2023-09-15 14:37:34
507 Split Subprocess   Start BpmnStartTask task_completed [email protected] 2023-09-15 14:37:34
506 Split Subprocess Start subprocess start_subprocess StartEvent task_completed [email protected] 2023-09-15 14:37:33
505 Split Subprocess   Start BpmnStartTask task_completed [email protected] 2023-09-15 14:37:33
504 Fmpeg chunked   ffmpeg_create ParallelMultiInstanceTask task_completed [email protected] 2023-09-15 14:37:33
503 Fmpeg chunked FFmpeg agent create videos ffmpeg_create [child] ServiceTask task_completed [email protected] 2023-09-15 14:37:33
502 Fmpeg chunked FFmpeg agent create videos ffmpeg_create [child] ServiceTask task_completed [email protected] 2023-09-15 14:37:29
501 Fmpeg chunked FFmpeg agent create videos ffmpeg_create [child] ServiceTask task_completed [email protected] 2023-09-15 14:37:25
500 Fmpeg chunked Start start_process StartEvent task_completed [email protected] 2023-09-15 14:37:20
499 Fmpeg chunked   Start BpmnStartTask task_completed [email protected] 2023-09-15 14:37:2

Requests sent to connector-proxy to access MADAM services:

First multi-instance task: the Arena engine sends requests to the service with a 4s delay between each task instance.
No parallel execution here!

172.20.0.2 - - [15/Sep/2023 14:37:25] "POST /v1/do/madam_mam/FFMpeg HTTP/1.1" 200 -
172.20.0.2 - - [15/Sep/2023 14:37:29] "POST /v1/do/madam_mam/FFMpeg HTTP/1.1" 200 -
172.20.0.2 - - [15/Sep/2023 14:37:33] "POST /v1/do/madam_mam/FFMpeg HTTP/1.1" 200 -

other tasks...
172.20.0.2 - - [15/Sep/2023 14:37:37] "POST /v1/do/madam_mam/FFMpeg HTTP/1.1" 200 -
172.20.0.2 - - [15/Sep/2023 14:37:41] "POST /v1/do/madam_mam/FFMpeg HTTP/1.1" 200 -
172.20.0.2 - - [15/Sep/2023 14:37:44] "POST /v1/do/madam_mam/FFMpeg HTTP/1.1" 200 -
172.20.0.2 - - [15/Sep/2023 14:37:48] "POST /v1/do/madam_mam/FFMpeg HTTP/1.1" 200 -
172.20.0.2 - - [15/Sep/2023 14:37:51] "POST /v1/do/madam_mam/FFMpeg HTTP/1.1" 200 -
172.20.0.2 - - [15/Sep/2023 14:37:54] "POST /v1/do/madam_mam/FFMpeg HTTP/1.1" 200 -
172.20.0.2 - - [15/Sep/2023 14:37:54] "POST /v1/do/madam_mam/Scanfolder HTTP/1.1" 200 -
172.20.0.2 - - [15/Sep/2023 14:37:54] "POST /v1/do/madam_mam/Scanfolder HTTP/1.1" 200 -
172.20.0.2 - - [15/Sep/2023 14:37:54] "POST /v1/do/madam_mam/Scanfolder HTTP/1.1" 200 -
172.20.0.2 - - [15/Sep/2023 14:37:58] "POST /v1/do/madam_mam/Concatenate HTTP/1.1" 200 -
172.20.0.2 - - [15/Sep/2023 14:38:01] "POST /v1/do/madam_mam/Concatenate HTTP/1.1" 200 -
172.20.0.2 - - [15/Sep/2023 14:38:04] "POST /v1/do/madam_mam/Concatenate HTTP/1.1" 200 -
172.20.0.2 - - [15/Sep/2023 14:38:07] "POST /v1/do/madam_mam/FFMpeg HTTP/1.1" 200 -
172.20.0.2 - - [15/Sep/2023 14:38:11] "POST /v1/do/madam_mam/FFMpeg HTTP/1.1" 200 -
172.20.0.2 - - [15/Sep/2023 14:38:14] "POST /v1/do/madam_mam/FFMpeg HTTP/1.1" 200 -

First MI task has a 4s delay between each instance:

504 Fmpeg chunked   ffmpeg_create ParallelMultiInstanceTask task_completed [email protected] 2023-09-15 14:37:33
503 Fmpeg chunked FFmpeg agent create videos ffmpeg_create [child] ServiceTask task_completed [email protected] 2023-09-15 14:37:33
502 Fmpeg chunked FFmpeg agent create videos ffmpeg_create [child] ServiceTask task_completed [email protected] 2023-09-15 14:37:29
501 Fmpeg chunked FFmpeg agent create videos ffmpeg_create [child] ServiceTask task_completed [email protected] 2023-09-15 14:37:25

The MI sub-process seems good and executes its instances in parallel, but with a 3s interval (it may be delayed by the first MI task...).

Enumeration process updates

Update the existing enumeration process model to use data stores so that it can be maintained by process architects.

Consider an introspection based api as an alternative to data stores

Mainly for discussion as we do #538 - depending on the use cases, it could be feasible that an introspection-based API can achieve the same end result in a safer/saner manner. If process b could query process a's task data/state (assuming permissions were good), then there is no need for process a to write to a mutable data store just so process b can read it. If process a and b are both running at the same time and need to share data back and forth as they run, this might not work, but for the use case where process b just needs data from process a's end state, this could provide a powerful alternative.

Missing documentation on process creation

When creating a new process there are two undocumented fields, or at least, I couldn't find anything related to them.

One is "notification addresses" and the other "metadata extractions".

Allow downloading a process model, and then re-uploading it

Currently we require that all process ids are unique across the whole system. This is required because call activities need to be able to uniquely reference another process in the system.

I suggest that we modify the process id when you are uploading the file, and just do this transparently, or perhaps with an alert message to let folks know that the process id was modified.

Improved Messages: API Authentication Keys

Allow users to create API keys which they can use to make calls directly to the SpiffWorkflow backend API without requiring a JSON web token.

  • Modify the database to allow users to create and name one or more API keys which they can use to connect; these API keys should have some kind of TTL associated with them and should be marked as invalid afterwards.
  • Update the UI to allow users to create and view api keys
  • Update the User.verify_token to allow authentication with an API key in addition to a JSON web token.

Error Boundary Event Development Work for Service Tasks

There is some development work we need to complete for Error Boundary Events to behave consistently for Service Tasks. We spoke in detail about this today and I wanted to capture some of the thoughts:

  1. We need to standardize all connectors to error out on failure. Currently Service calls to Connector Proxies do not raise errors - the HTTP Connector never errors out - if the call fails, the information will be in the response variable and would need to be parsed out in a post-script or script task. We should probably return an exception name or error name with each response: BambooUserNotFound, XeroAccessDenied, S3BucketNotFound, and that sort of thing (see the sketch after this list).
  2. Errors should be codified and reported as a part of a service signature. A call to an external service may fail in many ways, some of which may be recoverable, some may not. For example, a call to Bamboo HR might error out with a “404 - User not found” which may not be recoverable, vs a “500 - Internal Error” which may mean we should just wait for 30 minutes and try again. Every service is different, with different errors that a BPMN architect has no way of knowing. So we suggest that the Connector Proxy be able to report which errors are possible when making a service call, so you can catch each one and handle it as needed.
  3. Catch all errors in the backend, and allow the BPMN workflow to handle them if it chooses. When executing any “engine” task (Service Task, Script Task, etc…) BEFORE you raise the error, check to see if there is an error boundary event, and if so, execute that instead. This will allow BPMN authors the chance to catch errors, but will allow them to display in the pink error box if they are not caught. Query all active service tasks and add these to the xml. This will cause them to appear as options when configuring error boundary events. Talk to elizabeth about this.
  4. Allow for raising errors. It should be possible to raise an error, causing the workflow to properly fail and log the issue. This would allow the BPMN Author to force a process to fail out rather than just complete.
  5. Allow for generic exception catching. It should be possible to catch ANY task execution error - so you can put an error event on a task with no specific error code assigned (or perhaps you select ‘all’ as an option), and if a failure happens during the execution of that task it will definitely follow the path.
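A hedged sketch (field names are hypothetical, not the current connector-proxy contract) of a connector response that reports a codified error alongside the normal payload, as suggested in item 1:

    response = {
        "command_response": {"body": "", "mimetype": "application/json"},
        "error": {
            "name": "BambooUserNotFound",   # codified name an Error Boundary Event could catch
            "message": "404 - User not found",
            "recoverable": False,           # hints whether a retry makes sense
        },
    }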

Missing Element and Collection fields in Parallel multi-instance task

Arena is installed and running following this documentation:
https://www.spiffworkflow.org/posts/articles/get_started_docker/

Playing with spiff-arena, I am editing Parallel Multi-instance tasks.

But the task tab is missing Input/Output Element/Collection fields.

Capture du 2023-07-20 12-20-37

But they should appear as in the documentation here:
https://spiffworkflow.readthedocs.io/en/latest/_images/multiinstance_task_configuration1.png

Thanks for this great project.

When a Diagram fails, node states are wrong and different from event states

After an error on this diagram, the diagram is displayed. When clicking on a node to see its data, I cannot, because most nodes are in FUTURE or WAITING state. (I have written the states under each node in this image.)

Capture du 2023-09-14 13-49-08

Fortunately, the list of events is good and reflects the real state of the nodes:

Capture du 2023-09-14 13-41-49

and the failed-task link displays a meaningful error message. But I cannot debug by inspecting the data of completed nodes in the diagram, because they are in a false FUTURE state...

System to customize homepage ecosystem

Create a system to customize the homepage for a given deployment of spiff.

We could extend the extensions functionality to allow for rendering components that the homepage currently renders and remove the status column from all tables for a given deployment.

Initial example json:

page: "/tasks/completed" {
    api: "/tasks/blah",
    components: [{
      header: "Started by me",
      type: "ProcessInstanceShow",
      columns: "id,process,task,waiting_for,started,last_updated,last_milestone,action"
    }, {
      header: "Waiting by me",
      type: "ProcessInstanceShow",
      columns: "default"
    }, {
      type: "markdown",
      existing_extensions_stuff_could_go_here
    }]
  }

The system should also support doing similar customizations on the process model show page, even if this is not fully implemented right now.

https://docs.google.com/document/d/1lJsmlCLtXgAjXZAaiIlBTe5QO1IV2lIgSxv-qTARFI0/edit?usp=sharing

docker-compose for M* macs

Do you have a version of docker-compose which will work for M1/M2 macs?

I get these errors:

(spiff) ➜ spiffworkflow docker-compose up -d
[+] Running 7/7
✔ Network spiffworkflow_default Created 0.0s
✔ Container spiffworkflow-connector Started 0.3s
✔ Container spiffworkflow-backend Healthy 10.8s
! spiffworkflow-backend The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested 0.0s
! spiffworkflow-connector The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested 0.0s
✔ Container spiffworkflow-frontend Started 10.9s
! spiffworkflow-frontend The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested 0.0s
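Those warnings are Docker's standard message when amd64-only images run under emulation on an arm64 host. Until multi-arch images are published, one common workaround is to pin the platform explicitly, either via an environment variable or per service in a compose override (service names below are assumed to match the containers shown above):

    # one-off, for the whole compose invocation
    export DOCKER_DEFAULT_PLATFORM=linux/amd64

    # or per service in docker-compose.yml / an override file
    services:
      spiffworkflow-backend:
        platform: linux/amd64
      spiffworkflow-frontend:
        platform: linux/amd64
      spiffworkflow-connector:
        platform: linux/amd64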

SQLite3 concurrency error when Arena is open in two tabs in a web browser

When Arena is open in two tabs in a web browser, and I edit and save in one tab and then start the diagram in the other, I get an error message in the diagram tab when going back to it after the start.

OperationalError (sqlite3.OperationalError) database is locked
[SQL: UPDATE active_user SET last_seen_in_seconds=? WHERE active_user.id = ?]
[parameters: (1694010821, 11)]
(Background on this error at: https://sqlalche.me/e/20/e3q8)

It seems there is a concurrency problem in the backend with sqlite3.

I had the same problem in my own project and didn't manage to use SQLite3's concurrency support (it depends on the SQLite3 version and the Python binding). So I solved it with a worker thread that sends SQL requests to SQLite3 one at a time from a queue.
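A minimal sketch of that workaround (not SpiffArena code): a single worker thread owns the SQLite connection and applies statements one at a time from a queue.

    import queue
    import sqlite3
    import threading

    class SerializedSqliteWriter:
        """One writer thread owns the connection; concurrent callers just enqueue
        their statements instead of opening competing write transactions."""

        def __init__(self, db_path: str) -> None:
            self._queue: queue.Queue = queue.Queue()
            self._thread = threading.Thread(target=self._run, args=(db_path,), daemon=True)
            self._thread.start()

        def execute(self, sql: str, params: tuple = ()) -> None:
            # Called from any thread; the statement is applied by the worker.
            self._queue.put((sql, params))

        def _run(self, db_path: str) -> None:
            conn = sqlite3.connect(db_path)
            while True:
                sql, params = self._queue.get()
                conn.execute(sql, params)
                conn.commit()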

Docker Image CVEs

In reviewing the backend docker containers for SpiffArena, there are many CVEs. These accumulate over time in the base Docker image layers.

Free scan via ECR:

  • Set the ECR repository to scan-on-push
docker pull ghcr.io/sartography/spiffworkflow-backend:latest
docker tag ghcr.io/sartography/spiffworkflow-backend:latest {account}.dkr.ecr.{region}.amazonaws.com/{ecr_repo}:swb_latest
docker push {account}.dkr.ecr.{region}.amazonaws.com/{ecr_repo}:swb_latest

This will block our use of Spiff, and in the spirit of transparency, I want to make the maintainers aware.

Fix the data store modeling experience

Currently you can drop a data store onto a diagram and draw the arrows, but you need to go into the raw XML to properly wire things up. Ideally everything should be configurable from the property panels.

Backend crashing when running docker compose on postgres

It looks like a DB compatibility error. I am running RDS Aurora Postgres 15.1

Error:

2023-10-08 17:34:33 sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedFunction) operator does not exist: json ~~ unknown
2023-10-08 17:34:33 LINE 3: WHERE task.properties_json LIKE '%last_state_change": null%'
2023-10-08 17:34:33                                    ^
2023-10-08 17:34:33 HINT:  No operator matches the given name and argument types. You might need to add explicit type casts.
2023-10-08 17:34:33 
2023-10-08 17:34:33 [SQL: SELECT task.id AS task_id, task.guid AS task_guid, task.bpmn_process_id AS task_bpmn_process_id, task.process_instance_id AS task_process_instance_id, task.task_definition_id AS task_task_definition_id, task.state AS task_state, task.properties_json AS task_properties_json, task.json_data_hash AS task_json_data_hash, task.python_env_data_hash AS task_python_env_data_hash, task.runtime_info AS task_runtime_info, task.start_in_seconds AS task_start_in_seconds, task.end_in_seconds AS task_end_in_seconds 
2023-10-08 17:34:33 FROM task 
2023-10-08 17:34:33 WHERE task.properties_json LIKE %(properties_json_1)s]
2023-10-08 17:34:33 [parameters: {'properties_json_1': '%last_state_change": null%'}]
2023-10-08 17:34:33 (Background on this error at: https://sqlalche.me/e/20/f405)
2023-10-08 17:34:34 Exited with BAD EXIT CODE '1' in ./bin/boot_server_in_docker script at line: 96.

Thanks in advance.
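For reference, the error above is Postgres complaining that there is no LIKE (~~) operator for the json type. The usual SQLAlchemy-side fix is to cast the column to text before matching; a rough sketch (the model name is hypothetical, and this is not a confirmed patch):

    from sqlalchemy import Text, cast

    tasks = (
        TaskModel.query
        .filter(cast(TaskModel.properties_json, Text).like('%last_state_change": null%'))
        .all()
    )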

Task on parallel branch not executed

The following diagram fails on the "FFmpeg video + audio muxer" task, which requires the audio_destination variable set by the previous task "Fmpeg audio encode".

Error Details
Event Error Details
Error evaluating expression 'f"-y -i {video_destination} -i {audio_destination} -map 0:0 -map 1:0 -f mp4 {destination}"', name 'audio_destination' is not defined. Did you mean one of '['video_destination', 'destination', 'destinations']'?

Task Name:FFmpeg video + audio muxer
Task ID:ffmpeg_muxer
Call Activity Trace:Split Subprocess (ffmpeg_fixed_chunks.bpmn) -> FFmpeg video + audio muxer (ffmpeg_fixed_chunks.bpmn)

Two bugs here:

  • A task on a parallel branch is not executed (so the audio_destination variable is not set).
  • The states of tasks in the diagram are not set correctly (they do not match the event list).

In this diagram, the task "Fmpeg audio encode" is not executed before the final task "FFmpeg video + audio muxer":

Capture du 2023-09-14 13-49-08

It is quite hard to debug reading the diagram because the tasks in the diagram are not in their final state at the time of the error.

The list of events is quite explicit about it:

Capture du 2023-09-14 13-41-49

Add support for adding files to process groups

The new "upsearch" capabilities are enabled in the backend but are not easily used due to the fact that files cannot be added to process groups. If a json data store, for instance, was to be shared across process models this functionality would be required. Another use case is shared call activities or dmn files. There is a branch where the UI portions of this work are largely done, but the backend APIs and frontend navigation calls need to be updated to support a /files for process groups.

Improved Messages: UI For Managing Messages

Rather than defining Correlation Keys, Correlation Properties, and Message Names within the BPMN.io editor - and having to replicate these exact settings in each process model that needs to send or receive a message - provide a way to define these things with the Messages Tab in the top-level UI, similar to Data Stores.

Frontend Tasks

  • Messages UI - add to the Messages Tab the ability to list and edit Message Types. A Message Type is made up of a name and an optional description. It should be possible to view all process models that send or receive a message with this name.
  • Messages UI - Messages can, optionally, have one or more correlation properties. A correlation property is a name and a retrieval expression. The retrieval expression will be executed against the body of a sending message to extract a value.
  • Message API - pull list of messages from the backend and present them to the BPMN IO Editor.

Backend Tasks

  • Message API - CRUD on messages and correlation properties

BPMN IO Changes

  • Remove the message and correlation properties from the Collaboration Properties Panel
  • Add ability to get a list of messages and their correlation properties from an external source.

Service Task response containing "items" key causes TypeError exception on next task

I created a Spiff Connector to a service returning a dict with the key "items" in it. The corresponding value of the key is a list.

This key causes a TypeError on the next task in the process in the diagram. Here in the code:

https://github.com/sartography/spiff-arena/blob/8c7061b0402be1bc4b23e9d820c58b85356413c2/SpiffWorkflow/SpiffWorkflow/bpmn/serializer/helpers/registry.py#L49C1-L49C55

task.data is treated as an object (called obj in the code), and when obj.items() is called, Python tries to call the list value as a function.
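A minimal standalone sketch (not from the issue) of the underlying Python behavior, where a dict key named "items" shadows the dict method once attribute-style access is enabled:

    class Box(dict):
        """Simplified stand-in for the script engine's Box: keys are exposed as attributes."""
        def __getattribute__(self, name):
            try:
                return self[name]            # keys shadow normal attributes, including "items"
            except KeyError:
                return super().__getattribute__(name)

    task_data = Box({"items": [1, 2, 3]})

    # Serializer-style code that assumes .items() is always the dict method:
    pairs = [(k, v) for k, v in task_data.items()]   # TypeError: 'list' object is not callable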

Here is the full exception of the event on the next task:

Stacktrace:
TypeError: 'list' object is not callable
                                 ^^^^^^^^^^^
    items = [ (k, v) for k, v in obj.items() ]
  File "/app/venv/lib/python3.11/site-packages/SpiffWorkflow/bpmn/serializer/helpers/registry.py", line 49, in clean
    self.clean(obj)
  File "/app/venv/lib/python3.11/site-packages/SpiffWorkflow/bpmn/serializer/helpers/registry.py", line 42, in convert
                    ^^^^^^^^^^^^^^^
    return dict((k, self.convert(v)) for k, v in obj.items())
  File "/app/venv/lib/python3.11/site-packages/SpiffWorkflow/bpmn/serializer/helpers/dictionary.py", line 97, in <genexpr>
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    return dict((k, self.convert(v)) for k, v in obj.items())
  File "/app/venv/lib/python3.11/site-packages/SpiffWorkflow/bpmn/serializer/helpers/dictionary.py", line 97, in convert
           ^^^^^^^^^^^^^^^^^^^^
    return super().convert(obj)
  File "/app/venv/lib/python3.11/site-packages/SpiffWorkflow/bpmn/serializer/helpers/registry.py", line 43, in convert
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    'data': self.data_converter.convert(task.data),
  File "/app/venv/lib/python3.11/site-packages/SpiffWorkflow/bpmn/serializer/workflow.py", line 233, in task_to_dict
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    new_properties_json = self.serializer.task_to_dict(spiff_task)
  File "/app/src/spiffworkflow_backend/services/task_service.py", line 265, in update_task_model
    self.update_task_model(task_model, spiff_task)
  File "/app/src/spiffworkflow_backend/services/task_service.py", line 199, in update_task_model_with_spiff_task
    self.task_service.update_task_model_with_spiff_task(waiting_spiff_task)
  File "/app/src/spiffworkflow_backend/services/workflow_execution_service.py", line 198, in after_engine_steps
    self.after_engine_steps(bpmn_process_instance)
  File "/app/src/spiffworkflow_backend/services/workflow_execution_service.py", line 234, in on_exception
    self.delegate.on_exception(bpmn_process_instance)
  File "/app/src/spiffworkflow_backend/services/workflow_execution_service.py", line 91, in on_exception
    self.execution_strategy.on_exception(self.bpmn_process_instance)
  File "/app/src/spiffworkflow_backend/services/workflow_execution_service.py", line 422, in run_and_save
    execution_service.run_and_save(exit_at, save)
  File "/app/src/spiffworkflow_backend/services/process_instance_processor.py", line 1418, in _do_engine_steps
    self._do_engine_steps(exit_at, save, execution_strategy_name, execution_strategy)
  File "/app/src/spiffworkflow_backend/services/process_instance_processor.py", line 1383, in do_engine_steps
    yield
  File "/app/src/spiffworkflow_backend/services/process_instance_queue_service.py", line 98, in dequeued
Traceback (most recent call last):

During handling of the above exception, another exception occurred:

SpiffWorkflow.bpmn.exceptions.WorkflowTaskException: TypeError:'list' object is not callable. 
    raise wte
  File "/app/venv/lib/python3.11/site-packages/SpiffWorkflow/bpmn/PythonScriptEngine.py", line 78, in execute
    super().execute(task, script, methods)
  File "/app/src/spiffworkflow_backend/services/process_instance_processor.py", line 362, in execute
    raise e
  File "/app/src/spiffworkflow_backend/services/process_instance_processor.py", line 365, in execute
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    return task.workflow.script_engine.execute(task, self.script)
  File "/app/venv/lib/python3.11/site-packages/SpiffWorkflow/bpmn/specs/mixins/script_task.py", line 46, in _execute
           ^^^^^^^^^^^^^^^^^^^
    return self._execute(task)
  File "/app/venv/lib/python3.11/site-packages/SpiffWorkflow/bpmn/specs/mixins/script_task.py", line 31, in _run_hook
             ^^^^^^^^^^^^^^^^^^^^^^^
    result = self._run_hook(my_task)
  File "/app/venv/lib/python3.11/site-packages/SpiffWorkflow/specs/base.py", line 305, in _run
    raise exc
  File "/app/venv/lib/python3.11/site-packages/SpiffWorkflow/specs/base.py", line 316, in _run
             ^^^^^^^^^^^^^^^^^^^^^^^^^
    retval = self.task_spec._run(self)
  File "/app/venv/lib/python3.11/site-packages/SpiffWorkflow/task.py", line 589, in run
    task.run()
  File "/app/src/spiffworkflow_backend/services/workflow_execution_service.py", line 318, in spiff_run
    self.execution_strategy.spiff_run(self.bpmn_process_instance, exit_at)
  File "/app/src/spiffworkflow_backend/services/workflow_execution_service.py", line 408, in run_and_save
Traceback (most recent call last):

During handling of the above exception, another exception occurred:

TypeError: 'list' object is not callable
                ^^^^^^^^^^^
    for k, v in arg.items():
  File "/app/venv/lib/python3.11/site-packages/SpiffWorkflow/bpmn/PythonScriptEngineEnvironment.py", line 94, in __init__
              ^^^^^^
    self[k] = Box(v)
  File "/app/venv/lib/python3.11/site-packages/SpiffWorkflow/bpmn/PythonScriptEngineEnvironment.py", line 96, in __init__
           ^^^^^^^^^
    return Box(data)
  File "/app/venv/lib/python3.11/site-packages/SpiffWorkflow/bpmn/PythonScriptEngineEnvironment.py", line 149, in convert_to_box
    Box.convert_to_box(context)
  File "/app/venv/lib/python3.11/site-packages/SpiffWorkflow/bpmn/PythonScriptEngineEnvironment.py", line 158, in _prepare_context
    self._prepare_context(context)
  File "/app/venv/lib/python3.11/site-packages/SpiffWorkflow/bpmn/PythonScriptEngineEnvironment.py", line 47, in execute
    super().execute(script, context, external_methods)
  File "/app/src/spiffworkflow_backend/services/process_instance_processor.py", line 135, in execute
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    return self.environment.execute(script, context, external_methods)
  File "/app/venv/lib/python3.11/site-packages/SpiffWorkflow/bpmn/PythonScriptEngine.py", line 118, in _execute
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    return self._execute(script, task.data, external_methods or {})
  File "/app/venv/lib/python3.11/site-packages/SpiffWorkflow/bpmn/PythonScriptEngine.py", line 75, in execute
Traceback (most recent call last):

sample-process-models

Hello,

I've been trying to set up the spiffworkflow-backend project locally, following the instructions provided in the repository. However, I've encountered an issue related to the sample-process-models directory. The script ./bin/run_server_locally seems to expect this directory to be present, but I couldn't find it in the repository.

Could you please provide guidance on where I can find the sample-process-models directory or if there are any additional steps I need to follow to set up the project correctly? Your assistance would be greatly appreciated.

Thank you in advance for your help.

Best regards,
Sofiane


Creating new user credentials

Hi,

I am trying out the multi user approval from this article: https://www.spiffworkflow.org/posts/deep_dives/approval/

May I know how I can create the required users (and their passwords) for testing purposes?

Thanks.

Regards,
Kok Hoor

Logout failure when already logged out

If one has already logged out from Spiff, but has another tab open and attempts to log out, they see this:

[screenshot]

Which is a result of id_token_hint being set to null:

[screenshot]

Which happens here:

def logout(self, id_token: str, redirect_url: str | None = None) -> Response:
    if redirect_url is None:
        redirect_url = f"{self.get_backend_url()}/v1.0/logout_return"
    request_url = (
        self.open_id_endpoint_for_name("end_session_endpoint")
        + f"?post_logout_redirect_uri={redirect_url}&"
        + f"id_token_hint={id_token}"
    )
    return redirect(request_url)

I think this could be pretty easily avoided by checking if id_token is None, and if it is simply redirect to the main page.
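A rough sketch of that guard (not a confirmed patch; it reuses the names from the snippet above):

    def logout(self, id_token: str | None, redirect_url: str | None = None) -> Response:
        if redirect_url is None:
            redirect_url = f"{self.get_backend_url()}/v1.0/logout_return"
        if id_token is None:
            # Already logged out elsewhere; skip the OpenID end_session call entirely.
            return redirect(redirect_url)
        request_url = (
            self.open_id_endpoint_for_name("end_session_endpoint")
            + f"?post_logout_redirect_uri={redirect_url}&"
            + f"id_token_hint={id_token}"
        )
        return redirect(request_url)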

Timer events don't seem to work.

I've tried to create a timer event with the example definitions given by the spiff-arena interface as well as the Camunda documentation: R5/PT10S, 0 0/5 * * * ?, and */5 * * * *.

None of these seems to work; I'm getting the following errors:

'R5/PT10S': Error evaluating expression 'R5/PT10S', name 'R5' is not defined.

'0 0/5 * * * ?': Error evaluating expression '0 0/5 * * * ?', invalid syntax (<string>, line 1).

'*/5 * * * *': Error evaluating expression '*/5 * * * *', invalid syntax (<string>, line 1).
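The name 'R5' is not defined error suggests the timer value is evaluated as a Python expression, so an unquoted R5/PT10S is parsed as code (R5 divided by PT10S). A hedged example of the usual form, assuming the engine expects an expression that evaluates to an ISO 8601 string:

    # Timer duration/cycle expressed as a Python string literal that yields ISO 8601:
    "R5/PT10S"      # repeat 5 times, every 10 seconds
    # Cron-style values such as "0 0/5 * * * ?" are not ISO 8601 and are unlikely
    # to be accepted even when quoted.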

Docker-compose configuration for traefik reverse proxy and authentik identity provider over https

Hi,

I try to get the spiff-arena working on a server with your provided docker-compose.yml. When I open the URL http://<server_name>.<domain_name>.com:8001 the following URL redirection is shown in my browsers address bar: https://api.<server_name>.<domain_name>.com/v1.0/login?redirect_url=http%3A%2F%2F<server_name>.<domain_name>.com%3A8001%2F which cannot be resolved.

Do I have to create the DNS entry for api.<server_name>.<domain_name>.com, or can I rewrite the URL, so that the spiff-arena is working on that server?

It would be great if you could help me to configure the compose file for traefik reverse proxy and authentik as identity provider, to get the spiff-arena working on my domain with https. I created a traefik-crowdsec-stack with authentik based on this tutorial page https://goneuland.de/traefik-v2-3-reverse-proxy-mit-crowdsec-im-stack-einrichten/ and this stack is working really great with other docker services.

I'm grateful for every little hint

API container fails to start properly

The spiff.app-test host API container refuses to respond to requests:

 > curl --silent --show-error "http://localhost:9000/v1.0/status"   
curl: (56) Recv failure: Connection reset by peer

At startup the API container gets stuck on this line for about 2 minutes:

Reinitialized existing Git repository in /app/process_models/.git/

Then shows this error:

Traceback (most recent call last):
  File "/usr/local/lib/python3.11/logging/__init__.py", line 1110, in emit
    msg = self.format(record)
          ^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/logging/__init__.py", line 953, in format
    return fmt.format(record)
           ^^^^^^^^^^^^^^^^^^
  File "/app/src/spiffworkflow_backend/services/logging_service.py", line 66, in format
    record.message = record.getMessage()
                     ^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/logging/__init__.py", line 377, in getMessage
    msg = msg % self.args
          ~~~~^~~~~~~~~~~
TypeError: not all arguments converted during string formatting
Call stack:
  File "/app/./bin/data_migrations/run_all.py", line 84, in <module>
    main()
  File "/app/./bin/data_migrations/run_all.py", line 75, in main
    run_version_2(process_instances)
  File "/app/./bin/data_migrations/run_all.py", line 21, in st_func
    r = func(*args, **kwargs)
  File "/app/./bin/data_migrations/run_all.py", line 56, in run_version_2
    Version2.run(process_instances)
  File "/app/src/spiffworkflow_backend/data_migrations/version_2.py", line 13, in run
    current_app.logger.info("process_instance_count: ", len(process_instances))
  File "/usr/local/lib/python3.11/logging/__init__.py", line 1489, in info
    self._log(INFO, msg, args, **kwargs)
  File "/usr/local/lib/python3.11/logging/__init__.py", line 1634, in _log
    self.handle(record)
  File "/usr/local/lib/python3.11/logging/__init__.py", line 1644, in handle
    self.callHandlers(record)
  File "/app/venv/lib/python3.11/site-packages/sentry_sdk/integrations/logging.py", line 96, in sentry_patched_callhandlers
    return old_callhandlers(self, record)
Message: 'process_instance_count: '
Arguments: (986,)
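The TypeError above comes from passing the count as an extra positional argument to logger.info; a minimal sketch of the conventional fix (not a confirmed patch):

    # current call (from the traceback above) - the count never gets interpolated:
    #   current_app.logger.info("process_instance_count: ", len(process_instances))
    # conventional fix - let the logging module do the %-style interpolation:
    current_app.logger.info("process_instance_count: %s", len(process_instances))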

And then the logs are full of Failed to migrate process_instance warnings:

{"level": "WARNING", "message": "Failed to migrate process_instance '11238'. The error was 'NoneType' object has no attribute '_child_added_notify'", "loggerName": "spiffworkflow_backend", "processName": "MainProcess", "processID": 21, "threadName": "MainThread", "threadID": 139653673092928, "timestamp": "2023-10-09T10:16:28.104Z"}
{"level": "WARNING", "message": "Failed to migrate process_instance '11870'. The error was 'NoneType' object has no attribute '_child_added_notify'", "loggerName": "spiffworkflow_backend", "processName": "MainProcess", "processID": 21, "threadName": "MainThread", "threadID": 139653673092928, "timestamp": "2023-10-09T10:16:29.001Z"}
{"level": "WARNING", "message": "Failed to migrate process_instance '11956'. The error was 'NoneType' object has no attribute '_child_added_notify'", "loggerName": "spiffworkflow_backend", "processName": "MainProcess", "processID": 21, "threadName": "MainThread", "threadID": 139653673092928, "timestamp": "2023-10-09T10:16:29.641Z"}
{"level": "WARNING", "message": "Failed to migrate process_instance '12428'. The error was 'NoneType' object has no attribute '_child_added_notify'", "loggerName": "spiffworkflow_backend", "processName": "MainProcess", "processID": 21, "threadName": "MainThread", "threadID": 139653673092928, "timestamp": "2023-10-09T10:16:30.071Z"}
{"level": "WARNING", "message": "Failed to migrate process_instance '12666'. The error was 'NoneType' object has no attribute '_child_added_notify'", "loggerName": "spiffworkflow_backend", "processName": "MainProcess", "processID": 21, "threadName": "MainThread", "threadID": 139653673092928, "timestamp": "2023-10-09T10:16:30.617Z"}
{"level": "WARNING", "message": "Failed to migrate process_instance '12667'. The error was 'NoneType' object has no attribute '_child_added_notify'", "loggerName": "spiffworkflow_backend", "processName": "MainProcess", "processID": 21, "threadName": "MainThread", "threadID": 139653673092928, "timestamp": "2023-10-09T10:16:31.036Z"}
{"level": "WARNING", "message": "Failed to migrate process_instance '12668'. The error was 'NoneType' object has no attribute '_child_added_notify'", "loggerName": "spiffworkflow_backend", "processName": "MainProcess", "processID": 21, "threadName": "MainThread", "threadID": 139653673092928, "timestamp": "2023-10-09T10:16:31.439Z"}

A lot of them.

Frontend doesn't always load completely

Just after I run docker-compose up, I can open 127.0.0.1:8001 and log in to the frontend.

When fully loaded, the menu has 5 buttons:

  • Home
  • Processes
  • Process Instance
  • Messages
  • Configuration

If you're "home", you'll see three tabs and sections in each of them:
In progress: Tasks for my open instances, Tasks waiting for me, Tasks waiting for group: admin
Completed: My completed instances, With tasks completed by me, With tasks completed by group: admin
Start New: "Processes I can start"

Not fully loaded

However, after some time has passed, if I try to open the frontend again, instead of seeing everything I see an incomplete menu:

  • Home
  • Processes
  • Process Instances

And the tabs "In Progress", "Completed", "Start New +" have no content for them.

Possible problem?

Maybe I'm being logged out after a certain period? But if I am, I wasn't forwarded to a login page, or informed about not being logged in.

Reproducing

I haven't found how to reproduce this yet. I'm guessing it happens when the backend fails so the frontend can't load the information anymore.
I'll try to update this issue if it happens again.

Web Forms - Workflow engine doesn't find task data for dropdown list field

Today I wanted to try the Web Forms and found this great YouTube introduction (https://www.youtube.com/watch?v=1IaiaquQ0y0) from Dan. I loaded the example for the dropdown list, saved everything (3 files were created), and ran the workflow. But sadly this error happens.

[screenshot]

I looked in the file exampledata.json and it contains the correct JSON data:
[screenshot]

Side note: Is the styling via ui:options supported? I tried the example from the referenced documentation, but it wasn't rendered:

  "firstName": {
    "ui:options": {
      "chakra": {
        "p": "1rem",
        "color": "blue.200",
        "sx": {
          "margin": "0 auto"
        }
      }
    }
  }

Document react-jsonschema-form enhancements

We have enhanced, and will continue to enhance the react-jsonschema-form capabilities. We need to document these enhancements and explain how they can be used when building forms in SpiffArena. Enhancements include:

  1. Dynamic enumerations
  2. Checkbox validation
  3. Regex validation
  4. Date range selector
  5. Date validation when compared to another date

Workflows with "Start Timer Event" are not re-scheduled after completion if they contain a Manual Task object

I tried to use the internal scheduler to run workflows at defined intervals. The simple workflow named Cycle Test with a "normal" task is re-scheduled as expected until its termination. But when I use a Manual Task object in the workflow named Simple Example, it runs only once and is not started on a new interval after the Manual Task is completed.

Please see the discussion in the discord "general" channel.

[screenshots]

Dependency downloads have conflicting versions, missing dependencies, or deprecated dependencies.

Recently I've been trying to build spiff-arena with spiffworkflow. Everything is done on Ubuntu, where my Node.js and npm versions are v18.17.1 and 10.1.0 respectively, and I am not using Docker. The problem is mainly with the spiffworkflow-frontend installation: running npm install reports problems with the dependencies. First there are some dependency version conflicts; after modifying them as prompted and re-running npm install, the output (screenshot below) shows that many dependencies are deprecated and need to be replaced, and so on.
[screenshot]

Then I tried npm start despite the warnings above, and the src directory and its files were missing. This directory is supposed to be downloaded automatically during npm install, but after several attempts it still wasn't there, so I found the src directory under @microsoft on GitHub, downloaded it locally, copied it over, and tried npm start again; it still couldn't find the files under src.
[screenshot]

I've tried many times during the whole process, and most of the problems are dependency conflicts, missing dependencies, and deprecated dependencies, so I'd like to ask what I should do next.

URL doesn't seem to be displayed properly in manual task.

My environment

Thanks for this great low-code platform.
I am using spiff-arena in Docker on the Windows platform, and my browser is Chrome 113.
I installed spiff-arena following this article.
I think I found a bug where a URL is not displayed properly in a manual task.

Here is an example to reproduce the bug.

Fig. 1: a simple workflow

Fig. 2: the configuration of the manual task.

Note "Pre-Script" and "Instructions". I paste them here:
"Pre-Script": d = {"a_url": "http://example.com/a.json"}
"Instructions": {{d}}

Fig. 3: What I see when executing this BPMN.

The related HTML is <p>"{'a_url': '"<a href="http://example.com/a.json&amp;#39;%7D">http://example.com/a.json&amp;#39;}</a></p>
What I am expecting is a clean string without the HTML code, and that the URL is a valid one also without the HTML code.
I.e., <p>"{'a_url': '"<a href="http://example.com/a.json">http://example.com/a.json'}</a></p>

I think this is a small bug, please consider fixing it.
Thanks in advance.

Process instances - can't delete while running, but don't persist when restarted

While process definitions are persisted, when the docker-compose is restarted, we lose the information on the process instances.
At the same time, we cannot delete process instances from the frontend (afaik).

To reproduce this:

  1. docker-compose up ;
  2. Create a process;
  3. Start the process;
  4. Check the process instances, you'll see the process you just started;
  5. Go to the terminal where you started the docker-compose up, press ctrl-c or equivalent to quit;
  6. Run docker-compose down (not sure if this step is necessary to reproduce)
  7. docker-compose up ;
  8. Login again;
  9. Go to process definitions and process instances.

After this, the processes will be restored, but process instances won't.

Having a system where machines can be safely restarted without losing any information/state has a number of advantages:

  • it makes the system more resilient to failures in any environment - development, production, etc;
  • allows us to keep it contained without requiring an external database;
  • while at the same time, being able to perform changes on the system that require a restart and not losing anything.
