
siddhi-operator's Introduction

Getting Started

Siddhi Operator allows you to run stream processing logic directly on a Kubernetes cluster. To use it, you need access to a Kubernetes cluster, either in a cloud environment or in a local cluster created for development purposes.

Prerequisites

Running the Operator

  • Kubernetes v1.10.11+
  • kubectl version v1.11.3+

Building the Operator

In addition to the above, building the operator from source requires Git, Go, the Operator SDK CLI, and Docker (see the Build from Source section below).

Configure Kubernetes Cluster

Local Deployment

Minikube

Refer to the Minikube Installation Guide to set up a local Kubernetes cluster with Minikube.

Docker for Mac

Refer to the Docker for Mac Installation Guide to set up a local Kubernetes cluster with Docker for Mac.

Google Kubernetes Engine Cluster

Make sure you apply the configuration settings for your GKE cluster before installing the Siddhi Operator.

Enable the NGINX Ingress controller

The Siddhi Operator uses the NGINX Ingress Controller to expose deployments to external traffic.

In order to enable the NGINX Ingress controller in the desired cloud or on-premise environment, please refer to the official documentation, the NGINX Ingress Controller Installation Guide.

Supported Version: nginx 0.22.0+
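
For example, on Minikube the bundled NGINX ingress addon can be enabled with the following command (for other environments, follow the installation guide above):

    minikube addons enable ingress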

Enable NATS Server and NATS Streaming Server

The distributed deployment of a Siddhi app uses NATS as the intermediate messaging system. The distributed deployment creates partial Siddhi apps, and each partial Siddhi app is connected to the others through NATS.

The Siddhi operator supports NATS operator v0.5.0+ and NATS streaming operator v0.2.2+.

Note that if your Kubernetes version is v1.16 or higher, use the NATS streaming operator v0.3.0+ versions. If your Kubernetes version is lower than v1.16, you have to use the NATS streaming operator v0.2.2. The reason for this version incompatibility is that Kubernetes v1.16 removed the apps/v1beta2 API group.
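
As a reference only, minimal NatsCluster and NatsStreamingCluster resources could look like the sketch below. The exact fields depend on the NATS operator and NATS streaming operator versions you install, and the names siddhi-nats and siddhi-stan merely match the messaging system examples used later in this document.

    apiVersion: nats.io/v1alpha2
    kind: NatsCluster
    metadata:
      name: siddhi-nats
    spec:
      size: 1
    ---
    apiVersion: streaming.nats.io/v1alpha1
    kind: NatsStreamingCluster
    metadata:
      name: siddhi-stan
    spec:
      size: 1
      natsSvc: siddhi-nats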

Install Siddhi Operator in Kubernetes cluster

  1. Clone Siddhi Operator Git repository.
    git clone https://github.com/siddhi-io/siddhi-operator.git

  2. Execute the following commands to set up the Siddhi Operator in the Kubernetes cluster.

     kubectl apply -f ./deploy/siddhi_v1alpha2_siddhiprocess_crd.yaml
     kubectl apply -f ./deploy/service_account.yaml
     kubectl apply -f ./deploy/role.yaml
     kubectl apply -f ./deploy/role_binding.yaml
     kubectl apply -f ./deploy/operator.yaml

Testing a sample

  1. Execute the command below to create a sample Siddhi deployment.
    kubectl apply -f ./deploy/examples/example-stateless-log-app.yaml

    The Siddhi Operator will create a Siddhi Runner deployment with the Siddhi app deployed through the example-siddhi-app custom resource, a service, and an ingress to expose the HTTP endpoint defined in the Siddhi sample.

    $ kubectl get SiddhiProcesses
    NAME              STATUS    READY     AGE
    power-surge-app   Running   1/1       2m
    
    $ kubectl get deployment
    NAME                READY     UP-TO-DATE   AVAILABLE   AGE
    power-surge-app-0   1/1       1            1           2m
    siddhi-operator     1/1       1            1           2m
    
    $ kubectl get service
    NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
    kubernetes          ClusterIP   10.96.0.1       <none>        443/TCP    2d
    power-surge-app-0   ClusterIP   10.96.44.182    <none>        8080/TCP   2m
    siddhi-operator     ClusterIP   10.98.78.238    <none>        8383/TCP   2m
    
    $ kubectl get ingress
    NAME      HOSTS     ADDRESS     PORTS     AGE
    siddhi    siddhi    10.0.2.15   80        2m

ℹī¸ Note: The Siddhi Operator automatically creates an ingress and exposes the internal HTTP/HTTPS endpoints available in the Siddhi App by default. In order to disable the automatic ingress creation, you have to change the autoIngressCreation value in the Siddhi siddhi-operator-config config map to false or null.

  2. Obtain the external IP address (shown in the ADDRESS column) of the ingress resources by listing the Kubernetes ingresses.

    kubectl get ing

  3. Add the above host (siddhi) as an entry in the /etc/hosts file, for example as shown below.
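
     Assuming a Minikube cluster where the ingress address is the Minikube IP, the entry could be added as follows (otherwise, use the ADDRESS shown by kubectl get ing):

     echo "$(minikube ip) siddhi" | sudo tee -a /etc/hosts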

  4. Use the following curl command to publish an event to the sample Siddhi app that has been deployed.

    curl -X POST \
    http://siddhi/power-surge-app-0/8080/checkPower \
    -H 'Accept: */*' \
    -H 'Content-Type: application/json' \
    -H 'Host: siddhi' \
    -d '{
       "deviceType": "dryer",
       "power": 60000
    }'  
  5. View the logs of the Siddhi Runner pod and observe the log entry printed by the sample Siddhi app when it accepts the event through the HTTP endpoint.

    $ kubectl get pods
    
    NAME                                       READY     STATUS    RESTARTS   AGE
    power-surge-app-0-646c4f9dd5-rxzkq         1/1       Running   0          4m
    siddhi-operator-6698d8f69d-6rfb6           1/1       Running   0          4m
    
    $ kubectl logs power-surge-app-0-646c4f9dd5-rxzkq
    
    ...
    [2019-07-12 07:12:48,925]  INFO {org.wso2.transport.http.netty.contractimpl.listener.ServerConnectorBootstrap$HttpServerConnector} - HTTP(S) Interface starting on host 0.0.0.0 and port 9443
    [2019-07-12 07:12:48,927]  INFO {org.wso2.transport.http.netty.contractimpl.listener.ServerConnectorBootstrap$HttpServerConnector} - HTTP(S) Interface starting on host 0.0.0.0 and port 9090
    [2019-07-12 07:12:48,941]  INFO {org.wso2.carbon.kernel.internal.CarbonStartupHandler} - Siddhi Runner Distribution started in 6.853 sec
    [2019-07-12 07:17:22,219]  INFO {io.siddhi.core.stream.output.sink.LogSink} - LOGGER : Event{timestamp=1562915842182, data=[dryer, 60000], isExpired=false}

Please refer to the Siddhi documentation for more details about deploying Siddhi applications in Kubernetes.

Build from Source

Build the Operator

Clone the operator source repository by executing the below commands.

$ mkdir $GOPATH/src/github.com/siddhi-io
$ cd $GOPATH/src/github.com/siddhi-io
$ git clone https://github.com/siddhi-io/siddhi-operator.git

Build the operator by executing the command below. Replace DOCKER_REGISTRY_URL with the URL of the private or public Docker registry where you will host the Siddhi Operator image.

$ operator-sdk build <DOCKER_REGISTRY_URL>/<USER_NAME>/siddhi-operator:<TAG>

Push the operator image as follows.

$ docker push <DOCKER_REGISTRY_URL>/<USER_NAME>/siddhi-operator:<TAG>

Change the image name in the operator.yaml file.

$ sed -i 's|docker.io/siddhiio/siddhi-operator:.*|<DOCKER_REGISTRY_URL>/<USER_NAME>/siddhi-operator:<TAG>|g' deploy/operator.yaml

Now you can install the operator as described in the installation section above.

Test the Operator

Unit Tests

Execute the below command to start the unit tests.

$ go test ./pkg/controller/siddhiprocess/<PACKAGE_NAME>

For example, run the unit tests for package artifact.

$ go test ./pkg/controller/siddhiprocess/artifact

E2E Tests

If you have manually made any changes to the Operator, you can verify its functionality with the E2E tests. Execute the below commands to set up the needed infrastructure for the test-cases.

  1. It is recommended to create a separate namespace to test the operator. To do that use the following command.

    $ kubectl create namespace operator-test
  2. After that, you need to install the NATS operator and the NATS streaming operator in the operator-test namespace. To do that, please refer to this documentation.

  3. Then set up the siddhi-operator in the operator-test namespace using the following commands.

    $ kubectl apply -f ./deploy/siddhi_v1alpha2_siddhiprocess_crd.yaml --namespace operator-test
    $ kubectl apply -f ./deploy/service_account.yaml --namespace operator-test
    $ kubectl apply -f ./deploy/role.yaml --namespace operator-test
    $ kubectl apply -f ./deploy/role_binding.yaml --namespace operator-test
    $ kubectl apply -f ./deploy/operator.yaml --namespace operator-test
  4. Finally, test the operator using the following command.

    $ operator-sdk test local ./test/e2e --namespace operator-test --no-setup

For more details about Operator SDK tests, refer to this.

siddhi-operator's People

Contributors

buddhiwathsala, maheshika, minudika, mohanvive, niveathika, pcnfernando, suhothayan

siddhi-operator's Issues

Automatic NATS creation for siddhi

Description:
The Siddhi operator currently supports basic Siddhi app deployments such as HTTP. The failover deployment scenario of the Siddhi operator depends on NATS or Kafka. In that scenario, the user can specify a NATS configuration in the CR YAML file as below.

messaging.system:
  type: nats
  config:
    cluster.id: siddhi-nats
    bootstrap.servers: nats://siddhi-nats:4222

However, it is much better to make this configuration optional; whenever the user does not specify it, the Siddhi operator should create the NATS cluster and the NATS streaming cluster automatically. That feature would give a better user experience.

Suggested Labels:
type/new-feature

Initial status of the SiddhiProcess becomes empty for a while

Description:
When we deploy a SiddhiProcess YAML, the status of the SiddhiProcess is initially empty. After that, it moves through the usual state changes Pending -> Running. It would be better to set the Pending state right after the kubectl installation of the SiddhiProcess.

Suggested Labels:
type/improvement

Affected Product Version:
0.2.0-alpha

Deployment config read as YAML instead of a string

Description:
Currently, the Siddhi operator custom object reads the configurations that need to be changed at the Siddhi runner level as a string, as shown below.

siddhi.runner.configs: |-
    state.persistence:
      enabled: true
      intervalInMin: 1
      revisionsToKeep: 2
      persistenceStore: io.siddhi.distribution.core.persistence.FileSystemPersistenceStore
      config:
        location: siddhi-app-persistence

It would be much better to read the YAML directly instead of this string. To enable that, we have to read arbitrary YAML at the operator level.

In Go, we normally read arbitrary JSON objects using a structure like the one below.

map[string]interface{}

But because of this issue in the Operator SDK, reading an arbitrary YAML is problematic.

Suggested Labels:
type/improvement

Related Issues:
operator-framework/operator-sdk#1624

Handle Siddhi applications being updated as a config map

Description:
When a Siddhi application is configured as a config map in the Siddhi custom resource, an update to the config map does not trigger the update flow of the Siddhi operator, because the Siddhi custom resource itself was not updated; only one of its specification references was.

We need to explicitly handle this case by listening to changes in the object reference used in the Siddhi CRD under

apps:
  configMap: <appName>

Related Issues:
#31
#42
#62

Ingress NGINX does not support for 0.22.0+

Description:
The Siddhi operator only supports NGINX ingress controller versions up to 0.21.0. When we use an NGINX controller above version 0.21.0, it returns a 404 Not Found like below.

10.8.0.1 - [10.8.0.1] - - [25/Jun/2019:10:43:27 +0000] "POST /nats-app-1/8080/example HTTP/1.1" 404 0 "-" "PostmanRuntime/7.15.0" 346 0.007 [default-nats-app-1-8080] 10.8.0.54:8080 0 0.007 404 49455455f57fb87af2a04206caaf0f30

Suggested Labels:
type/improvement

Affected Product Version:
0.2.0-m1

OS, DB, other environment details and versions:
nginx controller 0.23.0+

Steps to reproduce:

  • Install the Siddhi operator
  • Install an NGINX controller version 0.22.0 or above
  • Install a Siddhi app that has an HTTP source

Service Implementation to create distributed partial Siddhi apps

Description:
There should be a service that can split a Siddhi app into partial Siddhi apps by adding the necessary messaging system details. The service should split the Siddhi app based on the annotations used in the queries. This is required for distributed Siddhi deployment.

Appending rules to the ingress instead of updating HTTP paths

Description:
When we deploy a SiddhiProcess that has HTTP sources, an ingress is automatically created and rules are added to it. That ingress uses siddhi as the hostname. But when we deploy several SiddhiProcesses with HTTP sources, multiple rules are added to the ingress as below.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
  name: siddhi
  namespace: default
spec:
  rules:
  - host: siddhi
    http:
      paths:
      - backend:
          serviceName: nats-app-0
          servicePort: 8080
        path: /nats-app-0/8080/
  - host: siddhi
    http:
      paths:
      - backend:
          serviceName: nats-app1-0
          servicePort: 8080
        path: /nats-app1-0/8080/

Here the siddhi-operator adds multiple rules with the siddhi hostname. It would be better to add the paths under the same host without replicating rules, like below.

spec:
  rules:
  - host: siddhi
    http:
      paths:
      - backend:
          serviceName: nats-app-0
          servicePort: 8080
        path: /nats-app-0/8080/
      - backend:
          serviceName: nats-app1-0
          servicePort: 8080
        path: /nats-app1-0/8080/

Suggested Labels:
Improvement

Affected Product Version:
Siddhi operator 0.1.0 - 0.2.0-m1

Steps to reproduce:

  1. Install the Siddhi operator
  2. Install two SiddhiProcesses that have HTTP sources.

PVC creation support

Description:
In the failover deployment scenario, the Siddhi applications should maintain persistence at two levels.

  1. Messaging system (NATS/KAFKA)
  2. Siddhi application level

Messaging system level persistence is handled by NATS or Kafka. To handle Siddhi application level persistence, the Siddhi app deployment needs a persistent volume. The creation of the PV is an infrastructure configuration, so the user has to create the PV manually. However, the creation of the PVC and binding it to the deployment pod can be done by the Siddhi operator.

To do that, the user can specify PV configurations like below, and the relevant PVC should be created by the Siddhi operator according to these configs.

persistence.volume:
  access.modes:
    - ReadWriteOnce
  volume.mode: Filesystem
  storageClassName: standard
  resources:
    requests:
      storage: 1Gi

Suggested Labels:
type/new-feature

Operation cannot be fulfilled on deployments.apps the object has been modified issue

Description:
When we deploy a SiddhiProcess custom object, it may sometimes return an error like below.

{"level":"error","ts":1563786102.7313259,"logger":"siddhi","msg":"Operation cannot be fulfilled on deployments.apps \"power-consume-app-1\": the object has been modified; please apply your changes to the latest version and try again","Request.Namespace":"default","Request.Name":"power-consume-app","error":"Operation cannot be fulfilled on deployments.apps \"power-consume-app-1\": the object has been modified; please apply your changes to the latest version and try again"

This error occurs due to this issue in controller-runtime: multiple create and update events go to the event queue, and the reconcile loop tries to process each event even after the deployment has been created successfully.

Suggested Labels:
type/fix

Affected Product Version:
Siddhi operator 0.2.0-m1

OS, DB, other environment details and versions:

Steps to reproduce:

Related Issues:
kubernetes-sigs/controller-runtime#403

Disable automatic ingress creation not working

Description:
The Siddhi operator has functionality to disable automatic ingress creation, but it did not work even when configured in the siddhi-operator-config config map.

As a solution, we have to remove the created ingress afterward if the ingress is no longer needed.

Suggested Labels:
Bug report

Affected Product Version:
0.2.0

OS, DB, other environment details and versions:
minikube version: v1.4.0

The server could not find the requested resource in Docker for Mac

Description:
When we deploy the siddhi-operator in a Docker for Mac Kubernetes cluster, it gives an error as below.

"msg":"Failed to update SiddhiProcess status","Request.Namespace":"default","Request.Name":"monitor-app","error":"the server could not find the requested resource (put siddhiprocesses.siddhi.io monitor-app)"

I tried this in Minikube with Kubernetes version v1.10.11, and it gives the same issue, which means the error happens due to the Kubernetes version. But when I run Minikube with CustomResourceSubresources=true enabled, it works fine.

minikube start --kubernetes-version=v1.10.11 --feature-gates=CustomResourceSubresources=true

Currently, Docker for Mac does not support switching between Kubernetes versions or enabling feature gates. Refer to this issue.

Specifications

  • Kubernetes: v1.10.11
  • Docker for Mac: Version 2.0.0.3

*v1.ConfigMap ended with: too old resource version in operator logs

Description:
When the Siddhi operator runs in production environments, the siddhi-operator pod prints unwanted logs as below.

W0708 07:36:11.095231       1 reflector.go:270] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:196: watch of *v1.ConfigMap ended with: too old resource version: 858892 (859770)
W0708 07:41:20.112375       1 reflector.go:270] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:196: watch of *v1.ConfigMap ended with: too old resource version: 860069 (860803)
W0708 07:49:48.123026       1 reflector.go:270] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:196: watch of *v1.ConfigMap ended with: too old resource version: 861098 (862497)
W0708 07:59:13.139714       1 reflector.go:270] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:196: watch of *v1.ConfigMap ended with: too old resource version: 862793 (864380)
W0708 08:08:14.160095       1 reflector.go:270] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:196: watch of *v1.ConfigMap ended with: too old resource version: 864679 (866186)

These logs are printed repeatedly, but no errors can be seen in the Kubernetes deployments created by the operator.

Suggested Labels:
type/fix

Affected Product Version:
siddhi operator version 0.1.0 to 0.2.0-m1

OS, DB, other environment details and versions:
minikube version: v0.33.1 and GKE

Steps to reproduce:

  1. Install the operator and create a custom object
  2. After a while, the operator log starts to print these warnings.

Use manually built Siddhi Runner image's dependencies for passing Siddhi App

Description:
At the moment, the Siddhi Kubernetes Operator uses the Siddhi Parser service to create the partial Siddhi apps to be distributed, based on the state of the queries.
This service furthermore provides information about whether we need to create ingresses to expose the endpoints, based on the source types.
To retrieve this information, the parser service compiles the user-given Siddhi app, creating a SiddhiAppRuntime. To successfully create a SiddhiAppRuntime for the Siddhi app, all extension JARs must be on the Siddhi Parser's classpath.

During the 0.2.0-m1 Siddhi Parser release, we included all the JARs that were bundled in the distribution's vanilla version (5.1.0-m1).
Assume that a user creates a custom extension, builds a Siddhi Runner docker image manually, and uses it within the Siddhi custom resource. In this scenario, the Siddhi Parser service would fail since its classpath does not have the custom JAR.

As a solution, we need to use all the JARs that are bundled in the Siddhi Runner docker image provided through the Siddhi custom resource or configured as an operator configuration.
To achieve this, we are going to move the Siddhi Parser component into the Siddhi Runner distribution itself and expose its functionality through a script.
With this approach, the Siddhi Operator can run the Siddhi Parser script in the Siddhi Runner image and obtain the distributed partial Siddhi apps and the other information needed during deployment.

Affected Product Version:
0.2.0-m1

Setting Readiness and Liveness probes to siddhi deployments

Description:
In previous releases (0.1.0 to 0.2.0-m1), Siddhi deployments did not set the Kubernetes readiness and liveness probes. By default, the Siddhi runner exposes http://0.0.0.0:9090/health to check the health of the Siddhi runner deployment.

In failover scenarios, it would be better to have these two probes in order to ensure high availability. The configurations of the suggested readiness and liveness probes are as below.

Liveness:  http-get http://:9090/health delay=120s timeout=1s period=120s 
Readiness: http-get http://:9090/health delay=60s timeout=1s period=20s 
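
Expressed as part of a Kubernetes container spec, the suggested probes would look roughly like the following sketch (values taken from the configuration above):

    livenessProbe:
      httpGet:
        path: /health
        port: 9090
      initialDelaySeconds: 120
      timeoutSeconds: 1
      periodSeconds: 120
    readinessProbe:
      httpGet:
        path: /health
        port: 9090
      initialDelaySeconds: 60
      timeoutSeconds: 1
      periodSeconds: 20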

Integrate distributed deployment support in Siddhi Operator

Description:
Siddhi distributed deployment is achieved using the annotations used in the queries. Based on the annotations used in the Siddhi app, it is divided into multiple child Siddhi apps. A service implementation is used to split the Siddhi app into multiple child Siddhi apps (#22). This deployment needs to be integrated with the Siddhi operator implementation.

Missing the Readiness and Liveness probes after deployment update

Description:
After the creation of the deployment, the readiness and liveness probes were missing. The reason is that when creating a deployment we use the CreateOrUpdate function, because we need to gracefully update the K8s artifacts when there is a spec change.

The MutateFunction is used to gracefully update the artifacts, and the MutateFunction of the deployment did not explicitly set those readiness and liveness probes. Therefore it took the default values for the readiness and liveness probes, and since the default values are empty, those probes became empty too.

Suggested Labels:
Bug report

Affected Product Version:
0.2.0

Failed to list resources in GKE

Description:
When the Siddhi operator is running in GKE, it shows an error as below.

E0707 07:57:28.673477       1 reflector.go:134] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:196: Failed to list *v1.ConfigMap: Get https://10.11.240.1:443/api/v1/namespaces/default/configmaps?limit=500&resourceVersion=0: net/http: TLS handshake timeout

E0707 07:57:28.674091       1 reflector.go:134] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:196: Failed to list *v1.Service: Get https://10.11.240.1:443/api/v1/namespaces/default/services?limit=500&resourceVersion=0: net/http: TLS handshake timeout

E0707 07:57:28.675022       1 reflector.go:134] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:196: Failed to list *v1.Deployment: Get https://10.11.240.1:443/apis/apps/v1/namespaces/default/deployments?limit=500&resourceVersion=0: net/http: TLS handshake timeout

E0707 07:57:28.676153       1 reflector.go:134] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:196: Failed to list *v1.PersistentVolumeClaim: Get 

This error shows up multiple times when the Siddhi operator has been running for a long time. It might occur because the Siddhi operator did not set up watches for those resources.

Affected Product Version:
Siddhi operator 0.1.1

OS, DB, other environment details and versions:
Google Kubernetes Engine (GKE)

Adding New Spec to the Failover Deployment

Description:
For the failover deployment of the siddhi-operator, we need a messaging layer and PV configurations to be specified in the CRD. So it is better to have specs like below.

messagingSystem: 
  type: nats
  bootstrapServers: 
    - "nats://siddhi-nats:4222"
  clusterId: siddhi-stan

persistentVolume: 
  accessModes: 
    - ReadWriteOnce
  resources:
    requests: 
      storage: 8Gi
  storageClassName: slow
  type: nats
  volumeMode: Filesystem

Notes

  • If the user specifies multiple Siddhi apps to be deployed in failover mode, the Siddhi operator will use the same PV configuration to create PVCs for each Kubernetes deployment.
  • If the user only specifies the messagingSystem as below, the Siddhi operator will automatically create the NATS cluster and the streaming cluster and deploy in failover mode.

    messagingSystem: 
      type: nats

  • Providing messagingSystem as above is an indication to the siddhi-operator to deploy Siddhi apps in failover mode (HA mode).
  • Also, the failover mode is very similar to the distributed deployment mode in Siddhi, with a replication factor of 1 in all partial apps.

Suggested Labels:

Support for mounting volume mounts to Siddhi instance

Description:
Let's assume I have a resource file that I need to use with the siddhi-io-file source.
At the moment, we cannot mount this file to the Siddhi Runner instance. We would have to create a custom docker image with the resource file already bundled into it.
Even in this case, if we need to update the resource file, we would have to rebuild the docker image.

This use case is valid for all cases where we need to use resource files (siddhi-io-file, siddhi-execution-tensorflow, siddhi-gpl-execution-pmml).

Changing messaging system specs to camel case

Description:
In the previous 0.2.0-m1 release, the messaging system spec had some fields that follow the dot naming convention.

  messagingSystem:
    type: nats
    config: 
      bootstrap.servers: 
        - "nats://siddhi-nats:4222"
      cluster.id: siddhi-stan

This should be changed as below.

  messagingSystem:
    type: nats
    config: 
      bootstrapServers: 
        - "nats://nats-siddhi:4222"
      clusterID: stan-siddhi

Suggested Labels:
type/improvement

Parser set persistenceEnabled=false when there is not a messaging system

Description:
I parsed a Siddhi app that has a window, without specifying a messaging system, using the Siddhi parser.

{
	"siddhiApps":["@App:name(\"PowerConsumptionSurgeDetection\")\n@App:description(\"App consumes events from HTTP as a JSON message of { 'deviceType': 'dryer', 'power': 6000 } format and inserts the events into DevicePowerStream, and alerts the user if the power consumption in 1 minute is greater than or equal to 10000W by printing a message in the log for every 30 seconds.\")\n\n/*\n    Input: deviceType string and powerConsuption int(Joules)\n    Output: Alert user from printing a log, if there is a power surge in the dryer within 1 minute period. \n            Notify the user in every 30 seconds when total power consumption is greater than or equal to 10000W in 1 minute time period.\n*/\n\n@source(\n    type='http',\n    receiver.url='${RECEIVER_URL}',\n    basic.auth.enabled='false',\n    @map(type='json')\n)\ndefine stream DevicePowerStream(deviceType string, power int);\n\n@sink(type='log', prefix='LOGGER') \ndefine stream PowerSurgeAlertStream(deviceType string, powerConsumed long); \n\n@info(name='surge-detector')  \nfrom DevicePowerStream#window.time(1 min) \nselect deviceType, sum(power) as powerConsumed\ngroup by deviceType\nhaving powerConsumed > 10000\noutput every 30 sec\ninsert into PowerSurgeAlertStream;"],
	"propertyMap":{"RECEIVER_URL":"http://0.0.0.0:8080/checkPower"}
}

For this request, the Siddhi parser only returns "persistenceEnabled": false, as below.

[
    {
        "persistenceEnabled": false,
        "replicas": 1,
        "siddhiApp": "@App:name(\"PowerConsumptionSurgeDetection\")\n@App:description(\"App consumes events from HTTP as a JSON message of { 'deviceType': 'dryer', 'power': 6000 } format and inserts the events into DevicePowerStream, and alerts the user if the power consumption in 1 minute is greater than or equal to 10000W by printing a message in the log for every 30 seconds.\")\n\n/*\n    Input: deviceType string and powerConsuption int(Joules)\n    Output: Alert user from printing a log, if there is a power surge in the dryer within 1 minute period. \n            Notify the user in every 30 seconds when total power consumption is greater than or equal to 10000W in 1 minute time period.\n*/\n\n@source(\n    type='http',\n    receiver.url='http://0.0.0.0:8080/checkPower',\n    basic.auth.enabled='false',\n    @map(type='json')\n)\ndefine stream DevicePowerStream(deviceType string, power int);\n\n@sink(type='log', prefix='LOGGER') \ndefine stream PowerSurgeAlertStream(deviceType string, powerConsumed long); \n\n@info(name='surge-detector')  \nfrom DevicePowerStream#window.time(1 min) \nselect deviceType, sum(power) as powerConsumed\ngroup by deviceType\nhaving powerConsumed > 10000\noutput every 30 sec\ninsert into PowerSurgeAlertStream;",
        "sourceDeploymentConfigs": [
            {
                "serviceProtocol": "TCP",
                "secured": false,
                "port": 8080,
                "isPulling": false,
                "deploymentProperties": {}
            }
        ]
    }
]

Since there is a window, persistenceEnabled should be true.

Suggested Labels:
type/fix

Affected Product Version:
0.2.0-m1

Potential event loss when NATS becomes unavailable in the distributed mode

Description:
The current NATS deployment that we automatically create through the Siddhi operator is a basic NATS deployment with a NATS cluster and a streaming cluster. In the distributed deployment of Siddhi, we use NATS as the primary messaging system that enables communication among the Siddhi apps. If NATS becomes unavailable for a while, there can be scenarios where some user events are lost.

Given the above, it would be much better if the Siddhi operator could provide an HA deployment of NATS in the automatic NATS deployment phase.

Suggested Labels:
Feature improvement

Affected Product Version:
0.2.0-beta

Steps to reproduce:

  1. Deploy a stateful Siddhi app in the default distributed manner using the Siddhi operator
  2. Send a sequence of events to NATS using a NATS client
  3. Manually bring down the NATS streaming cluster created in your K8s cluster.
  4. You will see some of the events getting lost.

Automatic KAFKA creation for siddhi

Description:
As described in issue #24, the failover deployment depends on NATS or Kafka. It would be better if the Siddhi operator could create the relevant Kafka cluster when the user does not specify the configuration of a messaging system.

Suggested Labels:
type/new-feature

App not deployed and Siddhi-Operator goes down if @App:name contains single quotes

Description:
If the @App:name annotation in the SiddhiProcess YAML file contains single quotes, the deployment is not created for the app and the siddhi-operator goes down. The siddhi-operator pod is restarted again and again, but it does not come back to normal.

Example: @App:name('ShipmentHistoryApp')

Affected Product Version:
siddhi-operator 0.1.1

OS, DB, other environment details and versions:
Minikube v1.1.1
kubectl v1.14.3

[Doc] Create a doc about how to deploy Siddhi apps in openshift using Siddhi operator

Description:

OpenShift is a product that has been used by many enterprise systems as their container orchestration framework. It has similar features to Kubernetes, with some extensions. Some of the OpenShift concepts and commands are different from K8s. Therefore, it would be much better to have a proper test with OpenShift and documentation related to such deployments.

Suggested Labels:

Documentation improvement

Test Siddhi app updates with zero downtime

Description:
Test and document the behavior during Siddhi app updates with zero downtime.
Especially focusing on ingresses and state.

Affected Product Version:
0.2.0-m1

By default service creation issue for non-HTTP siddhi apps

Description:
I was trying to deploy a non-HTTP siddhi application using siddhi operator version 0.1.0.

  @App:name("NatsSource")
  @App:description("Description of the plan")

  -- Please refer to https://docs.wso2.com/display/SP400/Quick+Start+Guide on getting started with SP editor. 

  @source(
      type='nats', 
      destination='${NATS_DEST}', 
      bootstrap.servers='${NATS_URL}', 
      @map(type='json'),
      cluster.id='${NATS_CLUSTER_ID}'
  )
  define stream inputStream (name string);

  @sink(type='log', prefix='LOGGER')
  define stream MonitorDevicesPowerStream(name string);

  @info(name = 'query1') 
  from inputStream 
  select *  
  insert into MonitorDevicesPowerStream

But the Siddhi operator throws an error like below.

{"level":"error","ts":1557916070.7782555,"logger":"controller_siddhiprocess","msg":"Failed to create new Service","Request.Namespace":"default","Request.Name":"monitor-app1","Service.Namespace":"default","Service.Name":"monitor-app1","error":"Service \"monitor-app1\" is invalid: spec.ports: Required value","stacktrace":"github.com/siddhi-io/siddhi-operator/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/Users/buddhi/Documents/MyWork/Go/src/github.com/siddhi-io/siddhi-operator/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/siddhi-io/siddhi-operator/pkg/controller/siddhiprocess.(*ReconcileSiddhiProcess).Reconcile\n\t/Users/buddhi/Documents/MyWork/Go/src/github.com/siddhi-io/siddhi-operator/pkg/controller/siddhiprocess/siddhiprocess_controller.go:196\ngithub.com/siddhi-io/siddhi-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/Users/buddhi/Documents/MyWork/Go/src/github.com/siddhi-io/siddhi-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:215\ngithub.com/siddhi-io/siddhi-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/Users/buddhi/Documents/MyWork/Go/src/github.com/siddhi-io/siddhi-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\ngithub.com/siddhi-io/siddhi-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/Users/buddhi/Documents/MyWork/Go/src/github.com/siddhi-io/siddhi-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/siddhi-io/siddhi-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/Users/buddhi/Documents/MyWork/Go/src/github.com/siddhi-io/siddhi-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/siddhi-io/siddhi-operator/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/Users/buddhi/Documents/MyWork/Go/src/github.com/siddhi-io/siddhi-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
{"level":"error","ts":1557916070.7799008,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"siddhiprocess-controller","request":"default/monitor-app1","error":"Service \"monitor-app1\" is invalid: spec.ports: Required value","stacktrace":"github.com/siddhi-io/siddhi-operator/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/Users/buddhi/Documents/MyWork/Go/src/github.com/siddhi-io/siddhi-operator/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/siddhi-io/siddhi-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/Users/buddhi/Documents/MyWork/Go/src/github.com/siddhi-io/siddhi-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\ngithub.com/siddhi-io/siddhi-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/Users/buddhi/Documents/MyWork/Go/src/github.com/siddhi-io/siddhi-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\ngithub.com/siddhi-io/siddhi-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/Users/buddhi/Documents/MyWork/Go/src/github.com/siddhi-io/siddhi-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/siddhi-io/siddhi-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/Users/buddhi/Documents/MyWork/Go/src/github.com/siddhi-io/siddhi-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/siddhi-io/siddhi-operator/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/Users/buddhi/Documents/MyWork/Go/src/github.com/siddhi-io/siddhi-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}

The reason for this issue is that the Siddhi operator always tries to create a service for every deployment without considering whether the Siddhi app is of HTTP type or not.

Affected Product Version:
Version 0.1.0

There is no way to view the partial apps after a distributed deployment

Description:
In the default distributed deployment of SiddhiProcesses, the user-given app is divided into partial Siddhi apps and those apps are deployed. Deployments and partial Siddhi apps have a one-to-one mapping. After a successful deployment, users do not have a direct way to see the partial Siddhi apps that were deployed. To do so, users have to follow these steps.

  1. Describe the pod
  2. Find the config map that was used to deploy the apps
  3. Describe the config map and find the deployed partial app

This process might take time and it is not very user-friendly.

Suggested Labels:
New features

Affected Product Version:
0.2.0

SiddhiProcess consumes more time to deploy due to the dynamic parser implementation

Description:
Before the 0.2.0-m2 release, the Siddhi operator used a static parser that was deployed as a prerequisite. If a user had a custom extension, that static parser would fail. Therefore, 0.2.0-m2 contains a dynamic parser that is used at runtime. The dynamic parser is nothing but another Siddhi runner deployment with extra parameters. The dynamic parser works as follows.

  1. Deploy Siddhi runner image with an external service
  2. Invoke the parser and parse the Siddhi apps
  3. Destroy the deployments and services

The previous static parser performed only the parsing functionality, so the dynamic parser takes more time to parse a Siddhi app. Hence, SiddhiProcess deployments are also delayed a bit compared to previous implementations.

Suggested Labels:
Feature improvement

Affected Product Version:
0.2.0-m2

Versioning in siddhi application level

Description:
Let's say a user first deploys a Siddhi app like below using the Siddhi operator.

@App:name("MonitorApp")
@App:description("Description of the plan") 
@source(
    type='http',
    receiver.url='${RECEIVER_URL}',
    basic.auth.enabled='${BASIC_AUTH_ENABLED}',
    @map(type='json')
)
define stream DevicePowerStream (type string, deviceID string, power int);
@sink(type='log', prefix='LOGGER')
define stream MonitorDevicesPowerStream(sumPower long);
@info(name='monitored-filter')
from DevicePowerStream#window.time(100 min)
select sum(power) as sumPower
insert all events into MonitorDevicesPowerStream;

Then the user changes the Siddhi app, for example changing sum to avg, and tries to redeploy it using kubectl.

The Siddhi operator uses an internal struct called SPContainer to maintain the internal state of the operator and parser interactions. Thus, these kinds of minor changes at the Siddhi app level cannot be captured by the operator.

To achieve this kind of versioning at the custom resource object level, we can use one of the following approaches.

  1. Use a hash value of the Siddhi app. The reconcile loop always checks for changes in the hash value.
  2. Add a new version entry under the CRD spec, or add it as an annotation.

Suggested Labels:
type/improvement

Affected Product Version:
0.1.0 - 0.2.0-m1

Make SiddhiProcess status running when pods become available

Description:
Currently, a SiddhiProcess comes to the Running state when all the deployments and services are up and running. But after the creation of a deployment, it takes some extra time for the pods to become available.

It would be better if a SiddhiProcess came to the Running state only after its pods become available.

Suggested Labels:
Feature improvement

Affected Product Version:
0.2.0-m2

[Kubernetes Deployment] Couldn't send events to siddhi app using minikube IP

Description:
I tried the guide in https://siddhi-io.github.io/siddhi/documentation/siddhi-5.x/siddhi-as-a-kubernetes-microservice-5.x/ using minikube and deployed the monitor-app successfully.

After adding the host siddhi and the minikube IP to the /etc/hosts file, I was able to send events to the monitor-app using curl as follows.

curl -X POST \
https://siddhi/monitor-app/8280/example \
-H 'Content-Type: application/json' \
-d '{
"type": "monitored",
"deviceID": "001",
"power": 341
}' -k

But when I directly used the minikube IP in the curl command, I couldn't send events.

I guess this is because the type of the service is ClusterIP and not NodePort.

Can I get some explanation about this?
Shall we mention clearly in the doc that we can use only siddhi as the host?

Affected Version: 0.1.1

Change CRD Siddhi App Retrieving Spec

Description:
Previously, the Siddhi operator had two specs to retrieve Siddhi apps, either as config maps or as a direct string, as follows.

  apps:
    - app1
    - app2
  query: |
    @App:name("MonitorApp")
    @App:description("Description of the plan") 
    
    @sink(type='log', prefix='LOGGER')
    @source(type='http', receiver.url='${RECEIVER_URL}', basic.auth.enabled='${BASIC_AUTH_ENABLED}', @map(type='json'))
    define stream DevicePowerStream (type string, deviceID string, power int);
    
    define stream MonitorDevicesPowerStream(deviceID string, power int);
    @info(name='monitored-filter')
    from DevicePowerStream[type == 'monitored']
    select deviceID, power
    insert into MonitorDevicesPowerStream;

But having two separate entries for deploying Siddhi apps might be misleading to a user. Therefore, aggregating these two into a single spec would be a better implementation practice.

  apps:
    - configMap: app1
    - configMap: app2
    - script: |
         @App:name("MonitorApp")
         @App:description("Description of the plan")

         @sink(type='log', prefix='LOGGER')
         @source(
             type='http',
             receiver.url='http://0.0.0.0:8080/example',
             basic.auth.enabled='false',
             @map(type='json')
         )
         
         define stream DevicePowerStream (type string, deviceID string, power int);

         @sink(type='log', prefix='LOGGER')
         define stream MonitorDevicesPowerStream(sumPower long);
         @info(name='monitored-filter')
         from DevicePowerStream#window.time(100 min)
         select sum(power) as sumPower
         insert all events into MonitorDevicesPowerStream;

Since this change affects the previous release, the Siddhi operator needs to upgrade the apiVersion.

Suggested Labels:
type/improvement

Using ConfigMap for Operator Deployment Configs

Description:
Previously, the operator-level configurations were read as environment variables in the operator deployment. It is better to add a config map to read those configurations instead of using environment variables.

The configurations read from that config map are listed below; a sample config map follows the list.

  • siddhiRunnerImage
  • autoIngressCreation
  • siddhiRunnerHome
  • siddhiRunnerImageSecret
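
A config map carrying those keys could look roughly like the following sketch (the values are placeholders for illustration; the runner image name and home path are taken from examples elsewhere in this document):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: siddhi-operator-config
    data:
      siddhiRunnerImage: siddhiio/siddhi-runner-alpine:<TAG>
      siddhiRunnerHome: /home/siddhi_user/siddhi-runner/
      siddhiRunnerImageSecret: siddhi-secret
      autoIngressCreation: "true"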

Suggested Labels:
type/improvement

Documentation Improvements

Description:
There are some areas that we should cover in the documentation, such as:

  • Ingress creation for TCP endpoints
  • Port forwarding doc for quick testing purposes
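
For the quick-testing case, the port forwarding to be documented could look like the following (the service name and port are taken from the earlier sample):

    kubectl port-forward svc/power-surge-app-0 8080:8080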

Change PV and messaging system configs

Description:
Previously, we had to specify the persistent volume configuration as below in the SiddhiProcess.

  persistentVolume: 
    accessModes: 
      - ReadWriteOnce
    resources: 
      requests: 
        storage: 1Gi
    storageClassName: standard
    volumeMode: Filesystem

Actually, these configs are used to create the PVC inside the Siddhi operator. Hence, the spec should be changed as below.

  persistentVolumeClaim: 
    accessModes: 
      - ReadWriteOnce
    resources: 
      requests: 
        storage: 1Gi
    storageClassName: standard
    volumeMode: Filesystem

Also, in the messaging system we get the cluster ID as an input. That cluster ID should be the cluster ID of the streaming cluster. Since NATS has two cluster types (NATS cluster and streaming cluster), it would be better to differentiate them in the spec itself, like below.

  messagingSystem:
    type: nats
    config: 
      bootstrapServers: 
        - "nats://nats-siddhi:4222"
      streamingClusterId: stan-siddhi

Suggested Labels:
Feature improvement

Affected Product Version:
0.2.0-m2

Getting segmentation fault error when creating PVC automatically

Description:
When creating the PVC automatically, if the user does not specify the storage class name, the operator gives a segmentation fault like below.

E0829 17:39:49.811061       1 runtime.go:69] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
src/github.com/siddhi-io/siddhi-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:76
src/github.com/siddhi-io/siddhi-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
src/github.com/siddhi-io/siddhi-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
/usr/local/bin/go/src/runtime/panic.go:522
/usr/local/bin/go/src/runtime/panic.go:82
/usr/local/bin/go/src/runtime/signal_unix.go:390
src/github.com/siddhi-io/siddhi-operator/pkg/controller/siddhiprocess/deploymanager/application.go:111
src/github.com/siddhi-io/siddhi-operator/pkg/controller/siddhiprocess/siddhicontroller/siddhicontroller.go:138
src/github.com/siddhi-io/siddhi-operator/pkg/controller/siddhiprocess/siddhiprocess_controller.go:249
src/github.com/siddhi-io/siddhi-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:215
src/github.com/siddhi-io/siddhi-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158
src/github.com/siddhi-io/siddhi-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133
src/github.com/siddhi-io/siddhi-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134
src/github.com/siddhi-io/siddhi-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
/usr/local/bin/go/src/runtime/asm_amd64.s:1337
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x10192b3]

goroutine 351 [running]:
github.com/siddhi-io/siddhi-operator/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	src/github.com/siddhi-io/siddhi-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:58 +0x105
panic(0x127dda0, 0x21c4b20)
	/usr/local/bin/go/src/runtime/panic.go:522 +0x1b5
github.com/siddhi-io/siddhi-operator/pkg/controller/siddhiprocess/deploymanager.(*DeployManager).Deploy(0xc0009e7928, 0xc000a9ca89, 0x7, 0xc000d24440, 0x13)
	src/github.com/siddhi-io/siddhi-operator/pkg/controller/siddhiprocess/deploymanager/application.go:116 +0x1f43
github.com/siddhi-io/siddhi-operator/pkg/controller/siddhiprocess/siddhicontroller.(*SiddhiController).CreateArtifacts(0xc0009e7bc0, 0xc000317110, 0x2, 0x2)
	src/github.com/siddhi-io/siddhi-operator/pkg/controller/siddhiprocess/siddhicontroller/siddhicontroller.go:138 +0x324
github.com/siddhi-io/siddhi-operator/pkg/controller/siddhiprocess.(*ReconcileSiddhiProcess).Reconcile(0xc0004acf00, 0xc0005444c8, 0x7, 0xc0005da920, 0x11, 0x21d8c80, 0x8, 0x200, 0x40)
	src/github.com/siddhi-io/siddhi-operator/pkg/controller/siddhiprocess/siddhiprocess_controller.go:249 +0x87d
github.com/siddhi-io/siddhi-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc00029c5a0, 0x0)
	src/github.com/siddhi-io/siddhi-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:215 +0x1cc
github.com/siddhi-io/siddhi-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1()
	src/github.com/siddhi-io/siddhi-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158 +0x36
github.com/siddhi-io/siddhi-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc000482410)
	src/github.com/siddhi-io/siddhi-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x54
github.com/siddhi-io/siddhi-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000482410, 0x3b9aca00, 0x0, 0x1, 0xc000772480)
	src/github.com/siddhi-io/siddhi-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134 +0xf8
github.com/siddhi-io/siddhi-operator/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc000482410, 0x3b9aca00, 0xc000772480)
	src/github.com/siddhi-io/siddhi-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
created by github.com/siddhi-io/siddhi-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start
	src/github.com/siddhi-io/siddhi-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:157 +0x311

Suggested Labels:
Type fix

Affected Product Version:
0.2.0-m2

Steps to reproduce:

  • Create a stateful distributed app without a storage class in the PVC spec.

Cannot deploy WSO2 Streaming Integrator image with different path

Description:
When deploying the Streaming Integrator, we observed a difference in the internal folder structure.
The differences between the Siddhi Runner paths and the Streaming Integrator paths are as follows.

  • Execution file name: runner.sh vs streaming-integrator.sh
  • Home path: /home/siddhi_user/siddhi-runner/ vs /home/wso2carbon/wso2-streaming-integrator/
  • File persistence location: wso2/runner/siddhi-app-persistence vs wso2/server/siddhi-app-persistence

Affected Product Version:
0.2.0-m1
OS, DB, other environment details and versions:

Steps to reproduce:
streaming integrator image - anugayan/streaming-integrator

Operator crashes when NATS is unavailable in the cluster

Description:
In the distributed deployment of the Siddhi operator, installing NATS is a prerequisite. Thus, the Siddhi operator always tries to watch NatsCluster and NatsStreamingCluster resources. If those resources are not present, the operator throws an error, and due to this error the Siddhi operator tends to crash unexpectedly. The error log is shown below.

no matches for kind "NatsCluster" in version "nats.io/v1alpha2" 
github.com/siddhi-io/siddhi-operator/vendor/github.com/go-logr/zapr.(*zapLogger).Error
	/Users/buddhi/Documents/MyWork/Go/src/github.com/siddhi-io/siddhi-operator/vendor/github.com/go-logr/zapr/zapr.go:128
github.com/siddhi-io/siddhi-operator/vendor/sigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start
	/Users/buddhi/Documents/MyWork/Go/src/github.com/siddhi-io/siddhi-operator/vendor/sigs.k8s.io/controller-runtime/pkg/source/source.go:89
github.com/siddhi-io/siddhi-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Watch
	/Users/buddhi/Documents/MyWork/Go/src/github.com/siddhi-io/siddhi-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:122
github.com/siddhi-io/siddhi-operator/pkg/controller/siddhiprocess.add
	/Users/buddhi/Documents/MyWork/Go/src/github.com/siddhi-io/siddhi-operator/pkg/controller/siddhiprocess/siddhiprocess_controller.go:136
github.com/siddhi-io/siddhi-operator/pkg/controller/siddhiprocess.Add
	/Users/buddhi/Documents/MyWork/Go/src/github.com/siddhi-io/siddhi-operator/pkg/controller/siddhiprocess/siddhiprocess_controller.go:55
github.com/siddhi-io/siddhi-operator/pkg/controller.AddToManager
	/Users/buddhi/Documents/MyWork/Go/src/github.com/siddhi-io/siddhi-operator/pkg/controller/controller.go:31
main.main
	/Users/buddhi/Documents/MyWork/Go/src/github.com/siddhi-io/siddhi-operator/cmd/manager/main.go:131
runtime.main
	/usr/local/bin/go/src/runtime/pr

An identical error log appears for NatsStreamingCluster too.

Suggested Labels:
fix

Affected Product Version:
0.2.0-m1

Steps to reproduce:
Install the operator without NATS

Change CRD Pod Spec

Description:

Previously, in version 0.1.1, the Siddhi operator had a spec called pod to retrieve docker image information, like below.

pod: 
  image: siddhiio/siddhi-runner-alpine
  imagePullSecret: siddhi-secret
  imageTag: "0.1.0"

Since these specifications are used to create the container, it is better to use the default K8s container spec instead of the pod spec, like below.

container: 
  env: 
    - 
      name: RECEIVER_URL
      value: "http://0.0.0.0:8080/example"
    - 
      name: BASIC_AUTH_ENABLED
      value: "false"
    - 
      name: NATS_URL
      value: "nats://siddhi-nats:4222"
    - 
      name: NATS_DEST
      value: siddhi
    - 
      name: NATS_CLUSTER_ID
      value: siddhi-stan
  image: "buddhiwathsala/siddhi-runner:0.1.1"

But this container spec does not contain an imagePullSecret field, so we need a separate spec to get the image pull secret.

Suggested Labels:

Improvement
