incubator-kie-kogito-serverless-operator's Introduction

SonataFlow Operator

The SonataFlow Operator defines a set of Kubernetes Custom Resources to help users deploy SonataFlow projects on Kubernetes and OpenShift.

Please visit our official documentation to learn more.

Available modules for integrations

If you're a developer interested in integrating your project or application with the SonataFlow Operator ecosystem, this repository provides a few Go modules, described below.

SonataFlow Operator Types (api)

Every custom resource managed by the operator is exported in the module api. You can use it to programmatically create any custom type managed by the operator. To use it, simply run:

go get github.com/kiegroup/kogito-serverless-operator/api

Then you can create any type programmatically, for example:

workflow := &v1alpha08.SonataFlow{
	ObjectMeta: metav1.ObjectMeta{Name: w.name, Namespace: w.namespace},
	Spec:       v1alpha08.SonataFlowSpec{Flow: *myWorkflowDef},
}

You can use the Kubernetes client-go library to manipulate these objects in the cluster.

You might need to register our schemes:

s := scheme.Scheme
utilruntime.Must(v1alpha08.AddToScheme(s))
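Putting the pieces together, here is a minimal sketch of creating the object in a cluster. It assumes the controller-runtime client is used and that the `api` module import path matches the `go get` command above; it is not runnable without a reachable cluster.

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
	"k8s.io/client-go/kubernetes/scheme"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"

	"github.com/kiegroup/kogito-serverless-operator/api/v1alpha08"
)

func main() {
	// Register the SonataFlow types into the default scheme.
	s := scheme.Scheme
	utilruntime.Must(v1alpha08.AddToScheme(s))

	// Build a client from the local kubeconfig (or in-cluster config).
	c, err := client.New(ctrl.GetConfigOrDie(), client.Options{Scheme: s})
	if err != nil {
		panic(err)
	}

	// Create a SonataFlow custom resource in the cluster.
	workflow := &v1alpha08.SonataFlow{
		ObjectMeta: metav1.ObjectMeta{Name: "greeting", Namespace: "default"},
	}
	if err := c.Create(context.TODO(), workflow); err != nil {
		panic(err)
	}
}
```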

Container Builder (container-builder)

Please see the module's README file.

Workflow Project Handler (workflowproj)

Please see the module's README file.

Development and Contributions

Contributing is easy, just take a look at our contributors' guide.

Productization notes

To productize the Red Hat OpenShift Serverless Logic Operator, read the notes in the productization section.

incubator-kie-kogito-serverless-operator's People

Contributors

amygao9, baldimir, davidesalerno, dependabot[bot], desmax74, dmartinol, domhanak, github-actions[bot], jordigilh, jstastny-cz, kevin-mok, lcaparelli, marianmacik, masayag, mbiarnes, prakritishrivastava, r00ta, radtriste, rgdoliveira, ricardozanini, richardw98, rodrigonull, sgitario, spolti, sutaakar, tchughesiv, vaibhavjainwiz, wmedvede, xieshenzh, yselkowitz


incubator-kie-kogito-serverless-operator's Issues

Pod instances keep spawning and terminating when deploying the workflow

Describe the bug

When deploying the sonataflow-inference-demo CR, the Kubernetes API server starts spawning multiple pods that end up in Terminating state seconds after they're spawned. In the end, only one pod remains in Running state as expected, but many pods are left in Terminating state and are never cleaned up. This issue does not depend on the namespace: it happens in both the default namespace and a newly created one.

Sonataflow-inference-demo repository:

https://github.com/hbelmiro/sonataflow-inference-pipeline-demo

Steps to reproduce

1. Deploy the latest SonataFlow operator version (use the code in main, not the latest image)
2. Change the base image referenced in 01-sonataflow-platform.yaml to quay.io/ricardozanini/sonataflow-python-devmode:latest
3. Run the deploy.sh command

Expected result:

One pod is created that eventually reaches the Running/Ready state

Actual result:

Multiple pods are created that quickly reach the Terminating state, while only one reaches the Running/Ready state.

$>oc get pod
NAME READY STATUS RESTARTS AGE
pipeline-55bb5b889d-lz22x 0/1 Terminating 0 17s
pipeline-5c8bd7db86-dvqlw 0/1 Terminating 0 17s
pipeline-fc9b44cdb-jdwsv 1/1 Running 0 4m17s

Expected behavior

No response

Actual behavior

No response

How to Reproduce?

No response

Output of uname -a or ver

No response

Golang version

No response

Operator-sdk version

No response

SonataFlow Operator version or git rev

No response

Additional information

No response

Archive logs of failed E2E test cases in PR checks

Description

Whenever an E2E test fails in a PR check, it is hard to determine what actually happened, especially when the test times out.

The goal of this issue is to collect the logs of the deployment used by the failed test and archive them on the workflow page for download.
This way we can debug container startup issues more closely.

Implementation ideas

In order to implement this we need to look at AfterAll() function of the tests and retrieve the logs from minikube.
Something like:
for each failed test, get the deployment and retrieve its logs:
kubectl get pods -n sonataflow-operator-system | grep <relevant pods>
kubectl logs orderprocessing-7899775db4-ff6nj -n sonataflow-operator-system > failed_test_case_1_deployment_logs

Unable to update the Build CR Status with a Builder object

Describe the bug

It seems that we cannot update the Build CR Status with a specific Builder object, and this is preventing us from reconciling a build with configurations different from the default one.

Expected behavior

If I pass a Builder object created from the container-builder package (i.e. the object created here https://github.com/kiegroup/kogito-serverless-operator/blob/be9838876a23ee1da214688433d5758cec0a2fdf/builder/builder.go#L179) to the manageStatusUpdate function (https://github.com/kiegroup/kogito-serverless-operator/blob/main/controllers/kogitoserverlessbuild_controller.go#L133), the Build CR Status object does not contain the desired Builder object.

Actual behavior

If I pass a specific Builder object to the manageStatusUpdate function (https://github.com/kiegroup/kogito-serverless-operator/blob/main/controllers/kogitoserverlessbuild_controller.go#L133), the updated Build CR Status object will contain the specific Builder object and we can retrieve it

How to Reproduce?

Steps to reproduce:

  1. kubectl apply -f config/samples/sw.kogito_v1alpha08_kogitoserverlessplatform_withCache.yaml -n kogito-workflows
  2. kubectl apply -f config/samples/sw.kogito_v1alpha08_kogitoserverlessworkflow.yaml -n kogito-workflows
  3. kubectl get KogitoServerlessBuild greeting -n kogito-workflows -o yaml

In the KogitoServerlessBuild CR Status, some information coming from the container-builder package will be missing (for example the Tasks and the KanikoTask).

Output of uname -a or ver

No response

Golang version

No response

Operator-sdk version

No response

Kogito Serverless Operator version or git rev

No response

Additional information

No response

Break the current operator's configuration into custom and managed properties

Description

Summary
Currently, the same ConfigMap holds managed (what we currently call "immutable") and custom properties defined by users.

A few application properties, such as the HTTP port, are managed by the operator and cannot be changed by users.

We should convert those properties to environment properties and set them on the workflow Deployment. These env properties must be immutable and managed by the operator. Env properties override the application.properties file currently mounted by the operator. See https://quarkus.io/guides/config-reference#configuration-sources

The current ConfigMap will be solely for user custom properties.

Acceptance criteria:

After a successful deployment of the SonataFlow instance ABC, the ABC-props ConfigMap is still owned by this instance but is no longer updated by the SonataFlow operator
All the immutable properties defined by the operator are mapped to env variables in the ABC Deployment owned by the SonataFlow instance (in the spec.template.spec.containers[0].env section)
If any variable defined in the spec.podTemplate.container.env section of the SonataFlow instance matches one of the immutable variable names, the value mapped in the ABC Deployment will be the one defined by the operator
    In this case, an Event of type Warning is created for the SonataFlow instance

Implementation ideas

No response

Omit imagePullPolicy from the deployment created by the SonataFlow operator

Describe the bug

The operator sets imagePullPolicy to Always. This goes against the OpenShift default, which is IfNotPresent, and against the Kubernetes default, which is also IfNotPresent when a fixed tag or SHA digest is used.

Expected behavior

If there's a tag other than latest, omit imagePullPolicy completely.

Actual behavior

imagePullPolicy is set in the generated deployment

How to Reproduce?

1. Deploy any SonataFlow resource, e.g. greeting, and see the default:

oc create -f https://raw.githubusercontent.com/rgolangh/serverless-workflows-helm/main/charts/workflows/charts/greeting/templates/01-sonataflow_greeting.yaml

2. oc get deployments greeting -o jsonpath={.spec.template.spec.containers[]} | jq '.imagePullPolicy'

Output of uname -a or ver

k8s kind 0.20

Golang version

No response

Operator-sdk version

No response

SonataFlow Operator version or git rev

No response

Additional information

https://issues.redhat.com/browse/FLPATH-892
No response

Workflow application is missing properties to post events to DataIndex service

Describe the bug

When the Data-Index service is generated by the SonataFlowPlatform instance, the workflow deployment is missing the following application properties:

  • mp.messaging.outgoing.kogito-processinstances-events.connector (set to quarkus-http)
  • mp.messaging.outgoing.kogito-processdefinitions-events.connector (set to quarkus-http)
  • mp.messaging.outgoing.kogito-processdefinitions-events.url (set to <DATA_INDEX_SERVICE_URL>/definitions)

Expected behavior

No response

Actual behavior

No response

How to Reproduce?

No response

Output of uname -a or ver

No response

Golang version

No response

Operator-sdk version

No response

SonataFlow Operator version or git rev

No response

Additional information

No response

Enhance SonataFlowClusterPlatform.spec so it better communicates which settings are used by workflows cluster-wide

Description

Enhance the api to better communicate to the user which platform settings are being applied to workflows cluster-wide. If possible, allow the user to choose the type(s) of platform setting applied (services, build, etc). However, this should be limited/controlled by the operator. Only specific settings should be candidates. For example services and, potentially, build.

Implementation ideas

Create a new string array at spec.enabledSettings (or something similar) that defines which platform settings are being exposed to workflows cluster-wide. Leverage CRD validation to restrict this slice to certain exact values... in our case services, and potentially build in the future.

type SonataFlowClusterPlatformSpec struct {
	PlatformRef SonataFlowPlatformRef `json:"platformRef"`

	// +kubebuilder:validation:UniqueItems=true
	// +kubebuilder:validation:Enum=services;build
	EnabledSettings *[]string `json:"enabledSettings,omitempty"`
}
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowClusterPlatform
metadata:
  name: cluster-platform
spec:
  platformRef:
    name: sonataflow-platform
    namespace: central-platform-ns
  enabledSettings:
  - services
  - build (in the future?)

If spec.enabledSettings = nil, maybe we add services to the array by default?

Dependency issue on main

Describe the bug

When I try to:

go get github.com/apache/incubator-kie-kogito-serverless-operator/workflowproj@731b2d464dea

I get
...
go: downloading github.com/apache/incubator-kie-kogito-serverless-operator/workflowproj v0.0.0-20240122.0.20240216131720-731b2d464dea
go: downloading github.com/apache/incubator-kie-kogito-serverless-operator v0.0.0-20240122.0.20240216131720-731b2d464dea
...
go: github.com/apache/incubator-kie-kogito-serverless-operator/workflowproj imports
github.com/apache/incubator-kie-kogito-serverless-operator/controllers/profiles: cannot find module providing package github.com/apache/incubator-kie-kogito-serverless-operator/controllers/profiles

Full log: https://gist.github.com/ederign/edc010f39408d95eed0d3f768d38415c

Expected behavior

No response

Actual behavior

No response

How to Reproduce?

No response

Output of uname -a or ver

No response

Golang version

No response

Operator-sdk version

No response

SonataFlow Operator version or git rev

No response

Additional information

No response

[apache-kie-ci] Nightly deploy job fails

Describe the bug

FYI: kogito-serverless-operator deploy job broken in apache CI - https://ci-builds.apache.org/job/KIE/job/kogito/job/main/job/nightly/job/kogito-serverless-operator-deploy

Fails because of:

go: downloading golang.org/x/mod v0.6.0-dev.0.20220106191415-9b9b3d81d5e3
/home/jenkins/jenkins-agent/workspace/KIE/kogito/main/nightly/kogito-serverless-operator-deploy/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./api/..." paths="./container-builder/api/..."
Error: err: exit status 1: stderr: go: errors parsing go.mod:
/home/jenkins/jenkins-agent/workspace/KIE/kogito/main/nightly/kogito-serverless-operator-deploy/bddframework/go.mod:5: unknown directive: toolchain

Goal is to fix the job to be green.

Expected behavior

Job succeeds

Actual behavior

Job fails

How to Reproduce?

Navigate to https://ci-builds.apache.org/job/KIE/job/kogito/job/main/job/nightly/job/kogito-serverless-operator-deploy
select latest build and click Rebuild from the options.

Opt-in: deploy SonataFlow as a Knative Serving service instead of a k8s Deployment

Description

Currently, the operator deploys workflows as regular Kubernetes Deployments. In some use cases, deploying a workflow as a Knative Serving resource makes sense.

  • Assess the possibility of deploying workflows as Knative Serving services by default if Knative is installed in the cluster
  • Add the possibility for users to opt-in to deploying as a ksvc or k8s deployment by labeling the CR.
  • Once labeled and deployed, this attribute must be immutable. We may assess the possibility of changing this attribute later on
  • Initially, revisions handling is out of scope; the operator won't reconcile this field, so users can change it based on their requirements. An assessment of revisions in the platform can be done later

Implementation ideas

No response

Clean up Minikube-related files and migrate methods to Kind

Description

Currently there are lots of remnants of Minikube used for testing in several modules.

After #376 is done, we should continue and remove the BDD tests' Minikube references and also rename existing resources to use the kind keyword instead of minikube.

GOAL
No trace of Minikube resources when unused.
No minikube keyword used in the codebase when unused.

Implementation ideas

No response

Creation of Knative Sinks in SonataFlow CRD

Description

In the first version of the Knative Eventing objects provisioning feature, we assumed the sink pre-existed and even hardcoded 'default' as the broker for triggers.

It is preferable to give users the flexibility to specify the sink configuration for their workflows' events, e.g. in the dev profile they can use an InMemoryChannel, while Kafka and other types of brokers can be supported/configured in prod.
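One possible shape for such configuration is sketched below; all field names under spec are hypothetical, since the actual API is yet to be designed.

```yaml
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
  name: greeting
spec:
  # hypothetical field: where this workflow's events are sent
  sink:
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: my-broker   # instead of the hardcoded 'default'
```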

Implementation ideas

No response

Define workflow behavior when a SonataFlowPlatform CR is changed

Description

Certain changes to the SonataFlowPlatform CR might have an impact on workflows that are already deployed (or running) in the cluster.
Such changes may include an update to the DataIndex service or to a PostgreSQL secret.
The owner of the platform should be able to determine the desired behavior when such a change occurs.

Today, there is a boolean Enabled flag for each service that determines how a prod-profile workflow should react when a service is enabled/disabled.

Implementation ideas

By replacing Enabled with a rolling-strategy property, the admin will be able to define the desired behavior when a service at the platform level is changed.

Suggested options for rolling-strategy property could be:

  • Disabled - do not act when changes are made at the platform level. This may lead to the runtime configuration of workflows drifting out of sync with their actual deployed values.
  • RollingUpdate - roll out the changes to the workflows.
  • Detached - workflows will not be using the services of this platform. (same semantics as the previous Enabled: false).

The discussion was initiated here

Workflow not registering to DataIndex if started simultaneously

Describe the bug

Related to #361

When starting the DI and a workflow at the same time, the workflow does not register in the DI.

I do have the logs

2024-02-15 08:28:04,460 INFO  [io.sma.health] (executor-thread-1) SRHCK01001: Reporting health down status: {"status":"DOWN","checks":[{"name":"Data Index Availability - startup check","status":"DOWN","data":{"error":"[unknown] - io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: sonataflow-platform-data-index-service.sonataflow-infra/172.31.200.9:80"}},{"name":"SmallRye Reactive Messaging - startup check","status":"UP"}]}
2024-02-15 08:28:19,371 INFO  [io.sma.health] (executor-thread-1) SRHCK01001: Reporting health down status: {"status":"DOWN","checks":[{"name":"Data Index Availability - startup check","status":"DOWN","data":{"error":"[unknown] - io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: sonataflow-platform-data-index-service.sonataflow-infra/172.31.200.9:80"}},{"name":"SmallRye Reactive Messaging - startup check","status":"UP"}]}
2024-02-15 08:28:34,375 INFO  [io.sma.health] (executor-thread-1) SRHCK01001: Reporting health down status: {"status":"DOWN","checks":[{"name":"Data Index Availability - startup check","status":"DOWN","data":{"error":"[unknown] - io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: sonataflow-platform-data-index-service.sonataflow-infra/172.31.200.9:80"}},{"name":"SmallRye Reactive Messaging - startup check","status":"UP"}]}

No restarts

oc -n sonataflow-infra get pods
NAME                                                      READY   STATUS    RESTARTS   AGE
greeting-64c66ccdb7-ldmdr                                 1/1     Running   0          7m42s
sonataflow-platform-data-index-service-6676f74b48-258wf   1/1     Running   0          7m42s
sonataflow-platform-jobs-service-d9455b6f7-2v8c9          1/1     Running   0          7m42s
sonataflow-psql-postgresql-0                              1/1     Running   0          10m

and no greetings appear in our UI that reads the Data Index.

Here is the dump of the DB DI_dump.zip

From what I see and understand from the describe output, the startupProbe

startupProbe:
      failureThreshold: 5
      httpGet:
        path: /q/health/started
        port: 8080
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 15
      successThreshold: 1
      timeoutSeconds: 3

means that the pod will only restart after 5 failures, and here we only have 3.

From the full log (see below), there are 2 errors related to publishing events to the DI when the workflow starts. It seems the workflow registers itself only at startup and never afterwards, so with no restart there is no registration.

I tried to delete the DI pod to see if something would change after its re-creation, but nothing did: the greeting workflow still does not appear, while other workflows created after the DI started are there.

Full log of greeting:

oc -n sonataflow-infra logs  greeting-64c66ccdb7-ldmdr 
Starting the Java application using /opt/jboss/container/java/run/run-java.sh ...
INFO exec -a "java" java -Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager -cp "." -jar /deployments/quarkus-run.jar 
INFO running in /deployments
__  ____  __  _____   ___  __ ____  ______ 
 --/ __ \/ / / / _ | / _ \/ //_/ / / / __/ 
 -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \   
--\___\_\____/_/ |_/_/|_/_/|_|\____/___/   
2024-02-15 08:27:48,983 WARN  [io.qua.config] (main) Unrecognized configuration key "kogito.data-index.health-enabled" was provided; it will be ignored; verify that the dependency extension for this configuration is set or that you did not make a typo
2024-02-15 08:27:48,984 WARN  [io.qua.config] (main) Unrecognized configuration key "kogito.jobs-service.health-enabled" was provided; it will be ignored; verify that the dependency extension for this configuration is set or that you did not make a typo
2024-02-15 08:27:48,984 WARN  [io.qua.config] (main) Unrecognized configuration key "kogito.data-index.url" was provided; it will be ignored; verify that the dependency extension for this configuration is set or that you did not make a typo
2024-02-15 08:27:48,984 WARN  [io.qua.config] (main) Unrecognized configuration key "kogito.jobs-service.url" was provided; it will be ignored; verify that the dependency extension for this configuration is set or that you did not make a typo
2024-02-15 08:27:49,846 WARN  [org.kie.kog.add.qua.kna.eve.KnativeEventingConfigSourceFactory] (main) K_SINK variable is empty or doesn't exist. Please make sure that this service is a Knative Source or has a SinkBinding bound to it.
2024-02-15 08:27:49,941 WARN  [io.qua.run.con.ConfigRecorder] (main) Build time property cannot be changed at runtime:
 - quarkus.devservices.enabled is set to 'false' but it is build time fixed to 'true'. Did you change the property quarkus.devservices.enabled after building the application?
2024-02-15 08:27:50,623 INFO  [org.kie.kog.add.qua.mes.com.QuarkusKogitoExtensionInitializer] (main) Registered Kogito CloudEvent extension
2024-02-15 08:27:50,673 INFO  [io.quarkus] (main) serverless-workflow-project 1.0.0-SNAPSHOT on JVM (powered by Quarkus 3.2.9.Final) started in 2.157s. Listening on: http://0.0.0.0:8080
2024-02-15 08:27:50,673 INFO  [io.quarkus] (main) Profile prod activated. 
2024-02-15 08:27:50,673 INFO  [io.quarkus] (main) Installed features: [cache, cdi, jackson-jq, kogito-addon-events-process-extension, kogito-addon-jobs-knative-eventing-extension, kogito-addon-knative-eventing-extension, kogito-addon-kubernetes-extension, kogito-addon-messaging-extension, kogito-addon-microprofile-config-service-catalog-extension, kogito-addon-process-management-extension, kogito-addon-source-files-extension, kogito-addons-quarkus-knative-serving, kogito-serverless-workflow, kubernetes, kubernetes-client, qute, reactive-routes, rest-client, rest-client-jackson, resteasy, resteasy-jackson, security, security-properties-file, smallrye-context-propagation, smallrye-health, smallrye-openapi, smallrye-reactive-messaging, smallrye-reactive-messaging-http, vertx]
2024-02-15 08:27:50,675 WARN  [io.sma.rea.mes.provider] (vert.x-eventloop-thread-7) SRMSG00234: Failed to emit a Message to the channel: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: sonataflow-platform-data-index-service.sonataflow-infra/172.31.200.9:80
Caused by: java.net.ConnectException: Connection refused
	at java.base/sun.nio.ch.Net.pollConnect(Native Method)
	at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:672)
	at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:946)
	at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:337)
	at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:776)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.base/java.lang.Thread.run(Thread.java:840)

2024-02-15 08:27:50,676 ERROR [org.kie.kog.eve.pro.ReactiveMessagingEventPublisher] (vert.x-eventloop-thread-7) Error while publishing message org.eclipse.microprofile.reactive.messaging.Message$8@7f469c1a: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: sonataflow-platform-data-index-service.sonataflow-infra/172.31.200.9:80
Caused by: java.net.ConnectException: Connection refused
	at java.base/sun.nio.ch.Net.pollConnect(Native Method)
	at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:672)
	at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:946)
	at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:337)
	at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:776)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.base/java.lang.Thread.run(Thread.java:840)

2024-02-15 08:28:04,460 INFO  [io.sma.health] (executor-thread-1) SRHCK01001: Reporting health down status: {"status":"DOWN","checks":[{"name":"Data Index Availability - startup check","status":"DOWN","data":{"error":"[unknown] - io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: sonataflow-platform-data-index-service.sonataflow-infra/172.31.200.9:80"}},{"name":"SmallRye Reactive Messaging - startup check","status":"UP"}]}
2024-02-15 08:28:19,371 INFO  [io.sma.health] (executor-thread-1) SRHCK01001: Reporting health down status: {"status":"DOWN","checks":[{"name":"Data Index Availability - startup check","status":"DOWN","data":{"error":"[unknown] - io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: sonataflow-platform-data-index-service.sonataflow-infra/172.31.200.9:80"}},{"name":"SmallRye Reactive Messaging - startup check","status":"UP"}]}
2024-02-15 08:28:34,375 INFO  [io.sma.health] (executor-thread-1) SRHCK01001: Reporting health down status: {"status":"DOWN","checks":[{"name":"Data Index Availability - startup check","status":"DOWN","data":{"error":"[unknown] - io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: sonataflow-platform-data-index-service.sonataflow-infra/172.31.200.9:80"}},{"name":"SmallRye Reactive Messaging - startup check","status":"UP"}]}

Expected behavior

I expect the workflow to register itself once the DI is available.

Actual behavior

The workflow does not register with the DI once the DI is ready and reachable.

How to Reproduce?

No response

Output of uname -a or ver

No response

Golang version

No response

Operator-sdk version

No response

SonataFlow Operator version or git rev

No response

Additional information

No response

E2E tests are failing in nightly deploy job

Describe the bug

New tests added in recent PRs need additional setup on Minikube as they seem to fail on nightly jobs[1].

[1] https://ci-builds.apache.org/job/KIE/job/kogito/job/main/job/nightly/job/kogito-serverless-operator.e2e.minikube/10/console

Goal
Switch the E2E nightly tests to Kind OR fix the setup for Minikube to work with new tests.

Expected behavior

No response

Actual behavior

No response

How to Reproduce?

No response

Output of uname -a or ver

No response

Golang version

No response

Operator-sdk version

No response

SonataFlow Operator version or git rev

No response

Additional information

No response

Turn off Quarkus build analytics message in Dev profile

Description

When we execute the dev profile for a workflow, we can see in the console the message below asking to contribute build analytics.
The workflow works fine, but the log looks a bit ugly. Maybe with a proper system property we can turn off this message.

[INFO] writing file /home/kogito/serverless-workflow-project/target/generated-test-sources/open-api-stream/.openapi-generator/FILES
[INFO] Invoking compiler:3.11.0:testCompile (default-testCompile) @ serverless-workflow-project
[INFO] Changes detected - recompiling the module! :dependency
[INFO] Compiling 3 source files with javac [debug release 17] to target/test-classes

--- Help improve Quarkus ---

  • Learn more: https://quarkus.io/usage/
  • Do you agree to contribute anonymous build time data to the Quarkus community? (y/n and enter)
    [WARNING] Failed to collect user input for analytics
    java.util.NoSuchElementException: No line found
    at java.util.Scanner.nextLine (Scanner.java:1651)
    at io.quarkus.maven.DevMojo.lambda$execute$0 (DevMojo.java:431)
    at io.quarkus.analytics.ConfigService.lambda$userAcceptance$0 (ConfigService.java:79)
    at java.util.concurrent.CompletableFuture$AsyncSupply.run (CompletableFuture.java:1768)
    at java.util.concurrent.CompletableFuture$AsyncSupply.exec (CompletableFuture.java:1760)
    at java.util.concurrent.ForkJoinTask.doExec (ForkJoinTask.java:373)
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec (ForkJoinPool.java:1182)
    at java.util.concurrent.ForkJoinPool.scan (ForkJoinPool.java:1655)
    at java.util.concurrent.ForkJoinPool.runWorker (ForkJoinPool.java:1622)
    at java.util.concurrent.ForkJoinWorkerThread.run (ForkJoinWorkerThread.java:165)
    [info] [Quarkus build analytics] Didn't receive a valid user's answer: y or n. The question will be asked again next time.

Implementation ideas

No response

Rename "prod" profile to "preview"

Description

Calling it the "prod" profile might confuse users into thinking that the operator can deliver an immutable image ready for production use cases. While this can be true in simplistic scenarios, real use cases require much more than the simple embedded build platform we currently ship with the operator.

On Kubernetes we rely on Kaniko, and on OpenShift on a simple BuildConfig object. This setup is better suited for "preview" or staging use cases, where users want a glimpse of what a lighter image and an immutable workflow will look like in their topology.

Real use cases must rely on external build tooling. An example can be seen in this repo: https://github.com/flows-examples/gitops

This proposal is to rename the prod profile and its internal modules to preview. The profile capable of analyzing and deploying an externally built image will internally be called gitops, and it's activated as long as a valid .spec.podTemplate.flowContainer.image attribute is set.

Implementation ideas

Simply renaming and refactoring. No new features to add. Also, the guides must be updated accordingly.

Migrate e2e tests to BDD

Description

We currently have a new BDD test framework that can replace the default e2e tests. We understand that BDD is clearer and gives us more control over the tested use cases. Additionally, there's a huge code base that can be reused to put things together in a way that creates new test cases without coding.

This work involves:

  • Migrate the existing e2e test cases to BDD using the feature format
  • Migrate the existing CI to run the BDD instead of e2e
  • Remove the e2e folder from the project and any auxiliary functions

Implementation ideas

No response

GitHub Actions are not working after Apache migration

Describe the bug

Apache has a few restrictions when running GHA, you can see here: https://cwiki.apache.org/confluence/display/BUILDS/GitHub+Actions+status#GitHubActionsstatus-Security

Our actions must be adjusted to reflect this new reality.

Expected behavior

The actions should run.

Actual behavior

No response

How to Reproduce?

Just send a small PR to this repo.

Output of uname -a or ver

No response

Golang version

No response

Operator-sdk version

No response

SonataFlow Operator version or git rev

No response

Additional information

No response

[apache-ci] Nightly deploy job gets stuck when running `install.sh` for builder

Describe the bug

The nightly deploy job is stuck on:

09:50:19  #20 [operator-builder 14/14] RUN [ "sh", "-x", "/tmp/scripts/org.kie.kogito.app.builder/install.sh" ]
09:50:20  #20 0.717 + set -e
09:50:20  #20 0.720 + cd /workspace
09:50:20  #20 0.725 + CGO_ENABLED=0 GO111MODULE=on go build -trimpath -ldflags=-buildid= -a -o manager main.go
Cancelling nested steps due to timeout
19:37:59  Sending interrupt signal to process
19:38:04  #20 CANCELED

The node that was used is: builds33

[1] https://ci-builds.apache.org/job/KIE/job/kogito/job/main/job/nightly/job/kogito-serverless-operator-deploy/93/consoleFull

Expected behavior

No response

Actual behavior

No response

How to Reproduce?

No response

Output of uname -a or ver

No response

Golang version

No response

Operator-sdk version

No response

SonataFlow Operator version or git rev

No response

Additional information

No response

Support Multiple Knative Sinks

Description

Currently, due to the K_SINK env var injection, we can only support one SinkBinding per deployment. To support multiple event destinations from producers within a single deployment, we need to find a way to support multiple sinks.
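The limitation stems from how a Knative SinkBinding works: it injects a single K_SINK env var into the bound workload, so only one destination can be addressed that way. A minimal sketch of such a binding (resource names are illustrative):

```yaml
apiVersion: sources.knative.dev/v1
kind: SinkBinding
metadata:
  name: my-workflow-binding        # illustrative name
spec:
  subject:
    apiVersion: apps/v1
    kind: Deployment
    name: my-workflow              # the workflow deployment being bound
  sink:
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default                # the single destination injected as K_SINK
```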

Implementation ideas

No response

Allow creating resources without specifying a namespace

Description

With existing code, a namespace is required to use the workflowproj handler.

However, in certain cases, the namespace will be determined at a later stage than when the resources are generated.
Certain resources don't require the namespace to be specified upfront (e.g. ConfigMap and SonataFlow CRs).
If a resource carries a namespace other than the destination namespace, an attempt to create or apply it will fail, e.g.:

→ oc apply -f 02-configmap_mtaanalysis-props.yaml -n sonataflow-infra 
error: the namespace from the provided object "default" does not match the namespace "sonataflow-infra". You must pass '--namespace=default' to perform this operation.
→ oc create -f 02-configmap_mtaanalysis-props.yaml -n sonataflow-infra 
error: the namespace from the provided object "default" does not match the namespace "sonataflow-infra". You must pass '--namespace=default' to perform this operation.

Implementation ideas

Allow resources to be generated without specifying a namespace, by omitting the namespace attribute from the generated resources.
The namespace will then be set by the admin when applying the CRs to the cluster, in the designated namespace.
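As a rough illustration of the idea (not the actual workflowproj API), omitting the namespace amounts to dropping the metadata.namespace field from the generated manifest before it is serialized:

```go
package main

import "fmt"

// stripNamespace removes metadata.namespace from a generated resource
// manifest so the namespace can be chosen at apply time with -n.
// Illustrative only: the real workflowproj handler works with typed
// Kubernetes objects, where the equivalent is ObjectMeta.SetNamespace("").
func stripNamespace(manifest map[string]any) {
	if meta, ok := manifest["metadata"].(map[string]any); ok {
		delete(meta, "namespace")
	}
}

func main() {
	cm := map[string]any{
		"apiVersion": "v1",
		"kind":       "ConfigMap",
		"metadata": map[string]any{
			"name":      "mtaanalysis-props",
			"namespace": "default",
		},
	}
	stripNamespace(cm)
	// The namespace key is gone; `oc apply -n <ns>` would no longer conflict.
	_, ok := cm["metadata"].(map[string]any)["namespace"]
	fmt.Println(ok) // prints false
}
```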

Investigate why non-dev scenarios now require JDBC configuration

Describe the bug

As reported in Slack:

for non-dev it did not require any quarkus.datasource.jdbc.url in the past, but now it fails because none is provided.

That's why we disabled the tests temporarily: #378

Expected behavior

Non-dev scenarios should not require persistence configuration

Actual behavior

Our ephemeral scenarios are failing since the image now apparently requires JDBC configuration.

How to Reproduce?

See the given test case #378

Output of uname -a or ver

No response

Golang version

No response

Operator-sdk version

No response

SonataFlow Operator version or git rev

No response

Additional information

No response

Create knative resources for DataIndex and JobService

Description

As depicted in https://sonataflow.org/serverlessworkflow/latest/use-cases/timeout-showcase-example.html#_architecture, SinkBindings and Triggers should be automatically created for the communication between workflow deployments and Data Index/Job Service.

In particular, the SinkBindings and Triggers are mandatory for Data Index.
For Job Service, those objects need to be created depending on time-based events detected in the workflow payload, including timeout and sleep, as described here
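For reference, a hedged sketch of the kind of Trigger the operator could create for the Data Index (names and the event type filter are illustrative, not the final implementation):

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: data-index-process-events              # illustrative name
spec:
  broker: default
  filter:
    attributes:
      type: ProcessInstanceStateDataEvent      # illustrative event type
  subscriber:
    ref:
      apiVersion: v1
      kind: Service
      name: sonataflow-platform-data-index-service
```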

Implementation ideas

No response

Avoid hardcoding resources elements required by the operator

Description

By default, when deploying the operator via bundle, OLM will create a configMap named sonataflow-operator-builder-config:

ConfigMapName = "sonataflow-operator-builder-config"

Not only that, but there are also other places where the operator sets a "default" value that could be overridden by an admin when installing the operator on different platforms.

We may introduce a ConfigMap to carry all this information and rely on hardcoded constants only when this info is not available. This ConfigMap must be deployed with the manager within the same namespace, and its name should be bound to an env var on the manager's deployment.

Implementation ideas

  1. Add the hardcoded information to the Manager's ConfigMap
  2. Bind the Manager's ConfigMap name to an env var on its deployment.
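A sketch of idea 2, assuming a hypothetical env var name in the manager's Deployment:

```yaml
# Fragment of the manager's Deployment (env var name is hypothetical)
containers:
  - name: manager
    env:
      - name: BUILDER_CONFIGMAP_NAME           # hypothetical variable name
        value: sonataflow-operator-builder-config
```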

Container Builder PR checks are not working

Describe the bug

When running checks against the container-builder module, one can see the following error:

E: Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/s/systemd/libudev-dev_249.11-0ubuntu3.11_amd64.deb  404  Not Found [IP: 52.252.163.49 80]
Fetched 541 kB in 1s (797 kB/s)
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
Error: Process completed with exit code 100.

The apt-get command is not aligned with other nodes; additionally, a fixed Ubuntu version is used that should be replaced.

Expected behavior

No response

Actual behavior

No response

How to Reproduce?

No response

Output of uname -a or ver

No response

Golang version

No response

Operator-sdk version

No response

SonataFlow Operator version or git rev

No response

Additional information

No response

Set a specific pod selector and label for SonataflowPlatform installed services

Describe the bug

With the existing implementation, both the data-index and job-service deployments share the same selector for the Deployment and for the Kubernetes Service created for them.
The result is undesirable when both services are deployed in the same namespace of the same cluster.

Expected behavior

A specific selector should be set for each service.
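For example, each service could carry its own selector (the label key and values below are illustrative, not the implemented ones):

```yaml
# data-index Deployment/Service
selector:
  matchLabels:
    sonataflow.org/service: data-index     # illustrative label key
---
# job-service Deployment/Service
selector:
  matchLabels:
    sonataflow.org/service: jobs-service   # illustrative label key
```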

Actual behavior

No response

How to Reproduce?

No response

Output of uname -a or ver

No response

Golang version

No response

Operator-sdk version

No response

SonataFlow Operator version or git rev

No response

Additional information

No response

e2e PR check is failing due to recurrent 404 errors while installing packages

Describe the bug

e2e tests won't run since we have a setup problem while installing packages via apt-get: it does not update the current package lists and installs unnecessary packages.

Expected behavior

The e2e task should run.

Actual behavior

Failing with:

E: Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/k/krb5/libkdb5-10_1.19.2-2ubuntu0.2_amd64.deb  404  Not Found [IP: 52.252.75.106 80]
E: Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/k/krb5/libkadm5srv-mit12_1.19.2-2ubuntu0.2_amd64.deb  404  Not Found [IP: 52.252.75.106 80]
E: Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/k/krb5/libkadm5clnt-mit12_1.19.2-2ubuntu0.2_amd64.deb  404  Not Found [IP: 52.252.75.106 80]
E: Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/k/krb5/krb5-multidev_1.19.2-2ubuntu0.2_amd64.deb  404  Not Found [IP: 52.252.75.106 80]
E: Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/k/krb5/libkrb5-dev_1.19.2-2ubuntu0.2_amd64.deb  404  Not Found [IP: 52.252.75.106 80]

How to Reproduce?

Just open a PR and see that the action will fail.

Output of uname -a or ver

No response

Golang version

No response

Operator-sdk version

No response

SonataFlow Operator version or git rev

No response

Additional information

No response

Data-index deployment failed to start with image

Describe the bug

Using the following SonataflowPlatform CR to deploy DI and JS:

→ oc get sonataflowplatform -n sonataflow-infra -o yaml
apiVersion: v1
items:
- apiVersion: sonataflow.org/v1alpha08
  kind: SonataFlowPlatform
  metadata:
    annotations:
      meta.helm.sh/release-name: orchestrator
      meta.helm.sh/release-namespace: orchestrator
    creationTimestamp: "2024-01-09T10:55:09Z"
    generation: 2
    labels:
      app.kubernetes.io/managed-by: Helm
    name: sonataflow-platform
    namespace: sonataflow-infra
    resourceVersion: "39408746"
    uid: 6ca1122a-e03f-4458-98d7-eb56b0afb244
  spec:
    build:
      config:
        baseImage: quay.io/kiegroup/kogito-swf-builder-nightly:latest
        registry: {}
        strategy: platform
        strategyOptions:
          KanikoBuildCacheEnabled: "true"
          KanikoPersistentVolumeClaim: sonataflow-platform
        timeout: 5m0s
      template:
        resources:
          limits:
            cpu: 500m
            memory: 1Gi
          requests:
            cpu: 250m
            memory: 64Mi
        timeout: 0s
    devMode: {}
    services:
      dataIndex:
        enabled: true
        persistence:
          postgresql:
            secretRef:
              name: sonataflow-psql-postgresql
              passwordKey: postgres-password
              userKey: postgres-username
            serviceRef:
              name: sonataflow-psql-postgresql
              namespace: sonataflow-infra
        podTemplate:
          container:
            resources:
              limits:
                cpu: 500m
                memory: 1Gi
              requests:
                cpu: 100m
                memory: 512Mi
      jobService:
        enabled: true
        persistence:
          postgresql:
            secretRef:
              name: sonataflow-psql-postgresql
              passwordKey: postgres-password
              userKey: postgres-username
            serviceRef:
              name: sonataflow-psql-postgresql
              namespace: sonataflow-infra
        podTemplate:
          container:
            resources: {}
  status:
    cluster: openshift
    conditions:
    - lastUpdateTime: "2024-01-09T10:55:10Z"
      status: "True"
      type: Succeed
    info:
      goOS: linux
      goVersion: go1.19.9
    observedGeneration: 2
    version: "0.8"
kind: List
metadata:
  resourceVersion: ""

The deployment ends with job-service running successfully; however, the data-index pod fails to start.

The image detected by the operator for DI is quay.io/kiegroup/kogito-data-index-postgresql:latest.
However, the nightly image quay.io/kiegroup/kogito-data-index-postgresql-nightly:latest works fine if specified explicitly via:

      podTemplate:
         container:
           image: "quay.io/kiegroup/kogito-data-index-postgresql-nightly:latest"

Expected behavior

The image for DI recommended by the operator should work.

Actual behavior

The image detected by the operator for DI is: quay.io/kiegroup/kogito-data-index-postgresql:latest

→ oc logs -n sonataflow-infra deploy/sonataflow-platform-data-index-service -f
__  ____  __  _____   ___  __ ____  ______ 
 --/ __ \/ / / / _ | / _ \/ //_/ / / / __/ 
 -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \   
--\___\_\____/_/ |_/_/|_/_/|_|\____/___/   
2024-01-09 10:58:13,805 WARN  [io.qua.config] (main) Unrecognized configuration key "quarkus.kogito.devservices.enabled" was provided; it will be ignored; verify that the dependency extension for this configuration is set or that you did not make a typo
2024-01-09 10:58:19,711 WARN  [io.qua.run.con.ConfigRecorder] (main) Build time property cannot be changed at runtime:
 - quarkus.devservices.enabled is set to 'false' but it is build time fixed to 'true'. Did you change the property quarkus.devservices.enabled after building the application?
2024-01-09 10:58:21,833 INFO  [org.fly.cor.int.lic.VersionPrinter] (main) Flyway Community Edition 9.11.0 by Redgate
2024-01-09 10:58:21,834 INFO  [org.fly.cor.int.lic.VersionPrinter] (main) See what's new here: https://flywaydb.org/documentation/learnmore/releaseNotes#9.11.0
2024-01-09 10:58:21,834 INFO  [org.fly.cor.int.lic.VersionPrinter] (main) 
2024-01-09 10:58:23,416 INFO  [org.fly.cor.int.dat.bas.BaseDatabaseType] (main) Database: jdbc:postgresql://sonataflow-psql-postgresql.sonataflow-infra:5432/sonataflow (PostgreSQL 15.4)
2024-01-09 10:58:24,120 ERROR [io.qua.run.Application] (main) Failed to start application (with profile [http-events-support]): org.flywaydb.core.api.exception.FlywayValidateException: Validate failed: Migrations have failed validation
Migration checksum mismatch for migration version 1.32.0
-> Applied to database : 1722286283
-> Resolved locally    : 1406353711
Either revert the changes to the migration, or run repair to update the schema history.
Migration checksum mismatch for migration version 1.44.0
-> Applied to database : 799676352
-> Resolved locally    : 1679365749
Either revert the changes to the migration, or run repair to update the schema history.
Detected applied migration not resolved locally: 1.45.0.0.
If you removed this migration intentionally, run repair to mark the migration as deleted.
Detected applied migration not resolved locally: 1.45.0.1.
If you removed this migration intentionally, run repair to mark the migration as deleted.
Detected applied migration not resolved locally: 1.45.0.2.
If you removed this migration intentionally, run repair to mark the migration as deleted.
Need more flexibility with validation rules? Learn more: https://rd.gt/3AbJUZE
	at org.flywaydb.core.Flyway.lambda$migrate$0(Flyway.java:134)
	at org.flywaydb.core.FlywayExecutor.execute(FlywayExecutor.java:204)
	at org.flywaydb.core.Flyway.migrate(Flyway.java:128)
	at io.quarkus.flyway.runtime.FlywayRecorder.doStartActions(FlywayRecorder.java:82)
	at io.quarkus.deployment.steps.FlywayProcessor$startActions1770701860.deploy_0(Unknown Source)
	at io.quarkus.deployment.steps.FlywayProcessor$startActions1770701860.deploy(Unknown Source)
	at io.quarkus.runner.ApplicationImpl.doStart(Unknown Source)
	at io.quarkus.runtime.Application.start(Application.java:101)
	at io.quarkus.runtime.ApplicationLifecycleManager.run(ApplicationLifecycleManager.java:108)
	at io.quarkus.runtime.Quarkus.run(Quarkus.java:71)
	at io.quarkus.runtime.Quarkus.run(Quarkus.java:44)
	at io.quarkus.runtime.Quarkus.run(Quarkus.java:124)
	at io.quarkus.runner.GeneratedMain.main(Unknown Source)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at io.quarkus.bootstrap.runner.QuarkusEntryPoint.doRun(QuarkusEntryPoint.java:61)
	at io.quarkus.bootstrap.runner.QuarkusEntryPoint.main(QuarkusEntryPoint.java:32)

How to Reproduce?

No response

Output of uname -a or ver

No response

Golang version

No response

Operator-sdk version

No response

SonataFlow Operator version or git rev

57b1f038ce9af5d92be1337fa7dbb9ea6730f524

Additional information

No response

List of pods for the operator Deployment is wrong

Describe the bug

When we deploy the operator in a cluster where other operators are installed, the list of pods for the sonataflow-operator-controller-manager Deployment may contain pods that are not related to this operator.
The query performed to fetch the matching pods includes pods that are not managed by the SonataFlow operator, because it filters on a label that is also adopted by other operators, as in:

https://<CLUSTER_APISERVER>/api/kubernetes/api/v1/namespaces/openshift-operators/pods?limit=250&labelSelector=control-plane%3Dcontroller-manager&cluster=local-cluster

For instance, this is an example of the same query performed using the CLI:

% oc get pods -l control-plane=controller-manager
NAME                                                      READY   STATUS    RESTARTS         AGE
argocd-operator-controller-manager-dfc7f9499-jx4cw        1/1     Running   37 (2d17h ago)   14d
patterns-operator-controller-manager-59dbb77564-s4mpt     2/2     Running   0                41h
sonataflow-operator-controller-manager-6946b46b76-vb6hk   2/2     Running   0                39s
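One possible direction (label key/values below are illustrative, not a decided fix) is to add an operator-specific label to the manager's pod template and filter on it instead of the generic control-plane=controller-manager:

```yaml
# Pod template labels on the manager Deployment (illustrative values)
labels:
  control-plane: controller-manager
  app.kubernetes.io/name: sonataflow-operator   # disambiguating label
```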

Expected behavior

No response

Actual behavior

No response

How to Reproduce?

No response

Output of uname -a or ver

No response

Golang version

No response

Operator-sdk version

No response

SonataFlow Operator version or git rev

No response

Additional information

No response

Application should be restarted when an environment variable value is updated

Description

Steps:
Deploy a SonataFlow named a with a ConfigMap a-props including the user's application properties. Some properties are defined using env variables like ${ENV_VAR:default}. Another ConfigMap a-config defines the variables' values and is linked to the application using the spec.podTemplate.container.envFrom.configMapRef.name field of the SonataFlow resource.

After a successful deployment, verify the value of the env vars from the Pod's terminal.
Finally, update the vars value in the a-config ConfigMap.

Expected behavior:
The application should reflect the new env values (either because the Pod is restarted or because the Quarkus application restarts).

Actual behavior:
The application does not reflect the latest values of the injected env vars

Note:
There may be multiple ConfigMaps or Secrets from which the application takes the env vars values (using either the envFrom items or the env.valueFrom options)
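One common pattern to achieve this (a sketch, not the operator's current behavior) is to hash the referenced ConfigMap/Secret data into a pod-template annotation, so any value change rolls the Deployment:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
)

// checksum computes a deterministic hash of the referenced ConfigMap data.
// Stored in a pod-template annotation, a changed value yields a changed
// annotation, which makes the Deployment restart its pods.
func checksum(data map[string]string) string {
	keys := make([]string, 0, len(data))
	for k := range data {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic order regardless of map iteration
	h := sha256.New()
	for _, k := range keys {
		fmt.Fprintf(h, "%s=%s;", k, data[k])
	}
	return fmt.Sprintf("%x", h.Sum(nil))
}

func main() {
	before := checksum(map[string]string{"ENV_VAR": "old"})
	after := checksum(map[string]string{"ENV_VAR": "new"})
	fmt.Println(before != after) // prints true: the annotation changes
}
```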

Implementation ideas

No response

Operator service discovery is not activated for the dev profile

Describe the bug

The service discovery does basically nothing in the dev profile because it has not yet been activated for it.
We just need to activate it.

Expected behavior

No response

Actual behavior

No response

How to Reproduce?

No response

Output of uname -a or ver

No response

Golang version

No response

Operator-sdk version

No response

SonataFlow Operator version or git rev

No response

Additional information

No response

Move workflows execution JDK to java17

Describe the bug

While the sw-builder was moved to JDK 17 and the code is thus properly generated for Java 17, the runtime currently applied by the workflow execution is still Java 11.

Expected behavior

No response

Actual behavior

No response

How to Reproduce?

No response

Output of uname -a or ver

No response

Golang version

No response

Operator-sdk version

No response

SonataFlow Operator version or git rev

No response

Additional information

No response

Allow user customization on generated Knative resources

Description

In #350, a central default broker is used for all SinkBindings and Triggers. It is expected that users have the flexibility to manage the configuration of the generated Knative resources from the SonataFlow/SonataFlowPlatform CR.

Implementation ideas

As suggested here, users should be able to specify different brokers for events:

spec:
  events:
    sink:
      ... # YOUR PROPOSAL
    triggers:
      - event:
          broker:

where all the options can still fall back to defaults defined in spec.sink

Temporary use of the nightly images

Describe the bug

The selector produced for the job service and data index service is the same, "sonataflow-platform", which makes it impossible for each service to pick up its corresponding pods, etc.

Expected behavior

No response

Actual behavior

No response

How to Reproduce?

No response

Output of uname -a or ver

No response

Golang version

No response

Operator-sdk version

No response

SonataFlow Operator version or git rev

No response

Additional information

No response

Fix Jenkins Job to build and deploy nightly images

Describe the bug

Currently, Apache Jenkins has not been working since the migration from kiegroup. After an initial assessment, we need the following:

  1. Remove OpenShift tests since we don't have a cluster yet (we may add this later once we have a cluster to run)
  2. Assess the tooling to build the operator image in the current infrastructure
  3. Change the build stage to use the tooling we have in Apache Jenkins

Expected behavior

kogito-serverless-operator-deploy job to work on Apache Jenkins: https://ci-builds.apache.org/job/KIE/job/kogito/job/main/job/nightly/job/kogito-serverless-operator-deploy

Actual behavior

See description.

How to Reproduce?

No response

Output of uname -a or ver

No response

Golang version

No response

Operator-sdk version

No response

SonataFlow Operator version or git rev

No response

Additional information

No response

Convert the Knative properties to immutable properties in deployment

Description

This work depends on the completion of the task of immutable env vars for the properties (SRVLOGIC-195 and SRVLOGIC-196).

Currently, all the properties (application.properties) are preserved and reconciled in the workflow's ConfigMap. Once the immutable properties are converted to env var configuration in the deployment, we should have the corresponding update for the Knative-related properties.

Implementation ideas

No response

Production profile build failing with a configured subflow

Describe the bug

Steps
In OpenShift, deploy a SonataFlow instance named test with the prod profile and include a subflow with a user-defined ConfigMap called test-subflow.
Mount the ConfigMap in the SonataFlow instance using:

spec:
  resources: 
    configMaps:
      - configMap:
          name: escalation-subflow 

Expected behavior

The build generates an image containing the test SWF, the test-subflow SWF, and all the mounted user properties (if any).

Actual behavior

The build fails with an error:

NoSuchFileException:
            /home/kogito/serverless-workflow-project/src/main/resources/specs/jira.yaml

Looking into the Pod, the expected folder structure is missing:

sh-4.4# ls /home/kogito/serverless-workflow-project/resources
ls: cannot access '/home/kogito/serverless-workflow-project/resources': No such file or directory
sh-4.4# ls /home/kogito/
ls: cannot access '/home/kogito/': No such file or directory

How to Reproduce?

No response

Output of uname -a or ver

No response

Golang version

No response

Operator-sdk version

No response

SonataFlow Operator version or git rev

No response

Additional information

No response

Enable the sending of the process definition event when the data-index is present

Description

Similar to the process instance events, at workflow deployment time, if a data-index configuration is detected in the current platform, the sending of the process definition events must be enabled and properly configured for that service.

Note: if no SonataFlowPlatform-managed data-index is present, but the user has provided the configuration to send those events, it will be kept to ensure users can still configure interaction with a non-managed data-index deployment.

Implementation ideas

No response
