
core's Introduction

Serverless functions and edge microservices made painless

Getting Started | Documentation | Contributing | License


Project Flogo is an open source framework that simplifies building efficient, modern serverless functions and edge microservices. This repository contains the core library used to create and extend those Flogo applications.

Flogo Core

Flogo Core is the core Flogo library, which contains the APIs to create and extend Flogo applications.

Getting started

If you want to get started with Project Flogo, you should install the Flogo CLI. You can find details there on creating a quick sample application. You might also want to check out the getting started guide in our docs, or the Labs section for in-depth tutorials.

Documentation

Here is some documentation to help you get started understanding some of the fundamentals of the Flogo Core library.

  • Model: The Flogo application model
  • Data Types: The Flogo data types
  • Mapping: Mapping data in Flogo applications

In addition to low-level APIs used to support and run Flogo applications, the Core library contains some high-level APIs. There is an API that can be used to programmatically create and run an application. There are also interfaces that can be implemented to create your own Flogo contributions, such as Triggers and Activities.

  • Application: API to build and execute a Flogo application
  • Contributions: APIs and interfaces for Flogo contribution development

Contributing

Want to contribute to Project Flogo? We've made it easy: all you need to do is fork the repository you intend to contribute to, make your changes, and create a Pull Request! Once the pull request has been created, you'll be prompted to sign the CLA (Contributor License Agreement) online.

Not sure where to start? No problem, you can browse the Project Flogo repos and look for issues tagged kind/help-wanted or good first issue. To make this even easier, we've added the links right here too!

Another great way to contribute to Project Flogo is to check flogo-contrib. That repository contains some basic contributions, such as activities, triggers, etc. Perhaps there is something missing? Create a new activity or trigger or fix a bug in an existing activity or trigger.

If you have any questions, feel free to post an issue and tag it as a question, email [email protected] or chat with the team and community:

  • The project-flogo/Lobby Gitter channel should be used for general discussions, start here for all things Flogo!
  • The project-flogo/developers Gitter channel should be used for developer/contributor focused conversations.

For additional details, refer to the Contribution Guidelines.

License

Flogo source code in this repository is available under a BSD-style license; refer to LICENSE.

core's People

Contributors

abhide-tibco, abhijitwakchaure, awakchau-tibco, debovema, futurechallenger, jdattatr-tibco, lixingwang, mellistibco, milanbhagwat, pointlander, skothari-tibco, vanilcha-tibco, vijaynalawade, vnalawad-tibco, yxuco


core's Issues

Configure event queue size from env var

Current behavior (how does the issue manifest):
Today the event queue size is hardcoded to 100, which is not ideal. Under high request load, a queue of 100 can become a bottleneck.
Expected behavior:
We should be able to configure the value via an environment variable.

Minimal steps to reproduce the problem (not required if feature enhancement):

Please tell us about your environment (Operating system, docker version, browser & web ui version, etc):

Flogo version (CLI & contrib/lib. If unknown, leave empty or state unknown): 0.X.X

Additional information you deem important (e.g. issue happens only occasionally):

problem using array mapping in flow input/output mapping

Current behavior (how does the issue manifest):
Today we use $.xxx to access data in the current scope; for example, the current array loop element is accessed as $.elementName.

We also use $. to access current-scope data in trigger-to-flow or flow-to-trigger mappings.

This causes an issue when array mapping is used in a flow input/output mapping: it is ambiguous whether $.xxx resolves against the array scope or the trigger scope.
Expected behavior:

Need a better way to handle array mapping in trigger output mapping or flow output mapping.

Minimal steps to reproduce the problem (not required if feature enhancement):

Please tell us about your environment (Operating system, docker version, browser & web ui version, etc):

Flogo version (CLI & contrib/lib. If unknown, leave empty or state unknown): 0.X.X

Additional information you deem important (e.g. issue happens only occasionally):

Support datetime type for compare expression

Current behavior:

The operators below cannot be used to compare DateTime values:
==
>
<
>=
<=
!=
Expected behavior:

DateTime values should be comparable using the operators above.

What is the motivation / use case for changing the behavior?

Additional information you deem important (e.g. I need this tomorrow):

Installing test packages

go version go1.13.1 darwin/amd64

I'm following the tutorial for creating my own activity and get the following error when trying to install the test package:

 go get github.com/project-flogo/core/support/test                            
# github.com/project-flogo/core/support/test
../../../go/src/github.com/project-flogo/core/support/test/trigger.go:33:21: not enough arguments in call to tConfig.FixUp
	have (*trigger.Metadata)
	want (*trigger.Metadata, resolve.CompositeResolver)

If I ignore the test for now and try to install my activity as per the tutorial, it tells me that I have a missing flogo.json file.

flogo install github.com/carlskii/myNewActivity
Error validating project: not a valid flogo app project directory, missing flogo.json

Datatype is not preserved for overridden property values

Current behavior (how does the issue manifest):
When an app property value is overridden via the ENV or JSON mechanism, no type casting is performed. As a result, an app property of type integer can be set to a string value, and vice versa.
Expected behavior:
Type casting must be performed to ensure type safety.

Minimal steps to reproduce the problem (not required if feature enhancement):
Set an app property of type int and print its value using the Log activity. Now override the value with a string. The Log activity will print the string value.
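The expected type-safe behavior can be sketched with stdlib coercion; coerceOverride is a hypothetical helper, not a Flogo API:

```go
package main

import (
	"fmt"
	"strconv"
)

// coerceOverride sketches the expected behavior: an override value arriving as
// a string (e.g. from an env var) is cast back to the property's declared type,
// and an error is returned when the cast is impossible.
func coerceOverride(declaredType, raw string) (interface{}, error) {
	switch declaredType {
	case "int":
		return strconv.Atoi(raw)
	case "bool":
		return strconv.ParseBool(raw)
	default: // "string"
		return raw, nil
	}
}

func main() {
	v, err := coerceOverride("int", "42")
	fmt.Println(v, err) // 42 <nil>
	_, err = coerceOverride("int", "not-a-number")
	fmt.Println(err != nil) // true: the type mismatch is caught instead of silently kept
}
```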
Please tell us about your environment (Operating system, docker version, browser & web ui version, etc):

Flogo version (CLI & contrib/lib. If unknown, leave empty or state unknown): 0.9.x
Additional information you deem important (e.g. issue happens only occasionally):

Proposal for condition mapping

Current behavior:
Today we have the following mapping types:

  • Straight mapping
  • Object mapping
  • Array mapping (part of object mapping)


It would be a great feature to have a smarter mapper that can handle complex cases, where the mapping is determined by a conditional expression: if some condition is met, map from A; otherwise, map from B.


Expected behavior:
I'm considering adding if/else conditional mapping, i.e. performing the mapping based on conditions.

Use case
A flow sends requests to different publishers to deliver a book to the book store. The request data sent to a publisher must include a URL, the book name, etc.

If ($flow.book.count < 100 && $flow.book.publisher == "publisherA")
{
	"url": "http://publisherA/requestBook",
	"bookname": "=$flow.book.name"
}
Else if ($flow.book.publisher == "publisherB")
{
	"url": "http://publisherB/requestBook",
	"bookname": "=$flow.book.name"
}



This is a simple use case; users might have more complex mappings based on different conditions. So far we cannot achieve this with today's mapping.


So I've come up with an if/else conditional mapping proposal for the mapper. Here are some examples:

{
  "bookRequest": {
    "@if($flow.book.count < 100 && $flow.book.publisher == \"publisherA\")": {
      "url": "http://publisherA/requestBook",
      "bookname": "=$flow.book.name"
    },
    "@elseIf($flow.book.publisher == \"publisherB\")": {
      "url": "http://publisherB/requestBook",
      "bookname": "=$flow.book.name"
    },
    "@else": {
      "url": "http://publisherC/requestBook",
      "bookname": "=$flow.book.name"
    }
  }
}

@if with a foreach loop (array mapping): do foreach over books only when the method equals POST


{
  "books": {
    "mapping": {
      "books": {
        "@if($flow.method == \"POST\")": {
          "@foreach($.books, index, $loop.title == \"IOS\")": {
            "title": "=tstring.concat(\"title is \", $loop.title)",
            "isbn": "=$loop.isbn",
            "status": "=$loop.status",
            "categories": "=$loop.categories"
          }
        }
      }
    }
  }
}

@if with primitive field mapping

{
  "@if($flow.abc == \"bac\")": "=$activity[xxx].abc",
  "@elseIf($flow.abc == \"dddd\")": "=$activity[xxx].abc",
  "@else": "=$activity[xxx].other"
}

@mellistibco @fm-tibco @vijaynalawade: Please share your thoughts on this.

Support for special OnStartup/OnShutdown triggers to let app developers initialize/cleanup data using actions

Current behavior:
Today, there is no way to configure an action that executes before regular triggers start or after regular triggers are stopped.
Expected behavior:
Way to execute action(s) before regular trigger starts and after regular triggers are gracefully stopped.
What is the motivation / use case for changing the behavior?
In some cases, users might want to initialize or clean up things before triggers are started or after they are stopped, e.g. create tables in a database.
Additional information you deem important (e.g. I need this tomorrow):

Get trigger status

Current behavior:
There is no way to collect the status of all triggers.
Expected behavior:
The engine should expose an API to get the status of all triggers, just like the old flogoEngine.TriggerInfos().
What is the motivation / use case for changing the behavior?

Additional information you deem important (e.g. I need this tomorrow):

Smart engine - Ability to start/stop triggers based on engine load

Current behavior:
When runner type is set to POOLED, trigger goroutines are blocked when runner queue is full.
Expected behavior:
Some mechanism whereby app triggers can be temporarily stopped when the queue is full and restarted when the queue is relatively empty.
What is the motivation / use case for changing the behavior?
This feature will help control event flow when actions are slow in execution.
Additional information you deem important (e.g. I need this tomorrow):

coerce.ToType should support all go primitive types

Current behavior (how does the issue manifest):
Only a subset of primitive types is supported; in particular, the uint* types are not included.
Expected behavior:
Support uint, int8, uint8, int16, uint16, uint32, uint64, etc.
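A sketch of the requested coercion for one of the missing types, using only the stdlib; toUint is a hypothetical helper illustrating the behavior, not the actual coerce.ToType signature:

```go
package main

import (
	"fmt"
	"strconv"
)

// toUint coerces common input kinds to uint, mirroring what coerce.ToType
// could do for the currently unsupported unsigned types.
func toUint(val interface{}) (uint, error) {
	switch v := val.(type) {
	case uint:
		return v, nil
	case int:
		if v < 0 {
			return 0, fmt.Errorf("cannot coerce negative int %d to uint", v)
		}
		return uint(v), nil
	case float64:
		if v < 0 {
			return 0, fmt.Errorf("cannot coerce negative float %v to uint", v)
		}
		return uint(v), nil
	case string:
		n, err := strconv.ParseUint(v, 10, 64)
		return uint(n), err
	default:
		return 0, fmt.Errorf("unable to coerce %#v to uint", val)
	}
}

func main() {
	n, _ := toUint("42")
	fmt.Println(n) // 42
}
```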
Minimal steps to reproduce the problem (not required if feature enhancement):

Please tell us about your environment (Operating system, docker version, browser & web ui version, etc):

Flogo version (CLI & contrib/lib. If unknown, leave empty or state unknown): 0.9.2
Additional information you deem important (e.g. issue happens only occasionally):

Not able to add additional zapcores to flogo logger

Current behavior:
No provision to add additional zapcore (like syslog zap core implementation) to flogo logger

Expected behavior:
Provision to add additional zapcores into flogo root logger.

What is the motivation / use case for changing the behavior?
Would like to add syslog support to the flogo logger.

Additional information you deem important (e.g. I need this tomorrow):
I couldn't inject a syslog zap core into the Flogo logger with the provided API, so I ended up refactoring the flogo/core logger files; the result is available in my fork.

add /metrics endpoint for timeseries database integration

Current behavior:
project-flogo/services provides flow-state and flow-store. Based on my understanding, we need to call an API to populate data into these two services. For flow-state, I expected integration with a time-series database like InfluxDB or Prometheus.

Expected behavior:
A /metrics endpoint should be implemented for each Flogo app by default. The metrics should show simple success/failure counts for triggers and for each activity in a flow or stream.

What is the motivation / use case for changing the behavior?
Developers can focus on code, and SREs can simply set up scraping or polling for statistics collection without any coding effort.

Also, if the flow-state server goes down, all apps that depend on it lose their statistics.

Additional information you deem important (e.g. I need this tomorrow):

Keep schema consistency for triggers vs activities

Current behavior:
Activities currently have a schemas JSON node for input/output schemas, such as:

{
   "schemas":{
      "input":{
         "field1":{
            "type":"json",
            "value":"xxxx"
         }
      },
      "output":{
         "field1":{
            "type":"json",
            "value":"xxxx"
         }
      }
   }
}

But triggers currently use an outputSchemas node, and some triggers also have a reply section that requires schemas.

Expected behavior:
Change trigger schemas to the same structure as activities, such as:

"schemas":{
    "output":{
       "header":{
          "type":"json",
          "value":"xxxx"
       }
    },
    "reply":{
       "data":{
          "type":"json",
          "value":"xxxx"
       }
    }
 }

What is the motivation / use case for changing the behavior?

Additional information you deem important (e.g. I need this tomorrow):

Error in resolving activity output

There is an error while resolving the output of one activity in another. I'm attaching the sample json file to recreate the issue.

{
  "name": "restApp2",
  "type": "flogo:app",
  "version": "0.0.1",
  "description": "My flogo application description",
  "appModel": "1.0.0",
  "imports": [
    "github.com/project-flogo/contrib/activity/log",
    "github.com/project-flogo/contrib/trigger/rest",
    "github.com/project-flogo/contrib/activity/counter",
    "github.com/project-flogo/flow"
  ],
  "triggers": [
    {
      "id": "my_rest_trigger",
      "type": "rest",
      "settings": {
        "port": "8888"
      },
      "handlers": [
        {
          "settings": {
            "method": "GET",
            "path": "/test/:val"
          },
          "actions": [
            {
              "type": "flow",
              "settings": {
                "flowURI": "res://flow:simple_flow"
              },
              "input": {
                "in": "=$.pathParams.val"
              }
	    }
	  ]
        }
      ]
    }
  ],
  "resources": [
    {
      "id": "flow:simple_flow",
      "data": {
        "name": "simple_flow",
        "metadata": {
          "input": [
            { "name": "in", "type": "string",  "value": "test" }
          ],
          "output": [
            { "name": "out", "type": "string" }
          ]
        },
        "tasks": [
          {
            "id": "counter",
            "name": "Counter ",
            "activity": {
              "type": "counter",
              "settings": {
                  "counterName":"test",
                   "op":"increment"
                }
            }
          },
          {
            "id": "log",
            "name": "Log Message",
            "activity": {
              "type": "log",
              "input": {
                "message": "=$activity[counter].value",
                "flowInfo": "false",
                "addToFlow": "false"
              }
            }
          }
        ],
        "links": [
          {
            "from":"counter",
            "to":"log"
          }
        ]
      }
    }
  ]
}

Support custom log message separator

Current behavior:
Today we use the zap logger's default separator (tab)
Expected behavior:
Since zap already supports a custom separator, we should expose it to let the user control it.
refer to uber-go/zap#697
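The idea can be sketched independently of zap: a separator that defaults to the current tab but is user-configurable (zap itself exposes this via the ConsoleSeparator field of zapcore.EncoderConfig in recent versions):

```go
package main

import (
	"fmt"
	"strings"
)

// formatEntry joins log fields with a configurable separator, defaulting to
// the tab character that zap's console encoder uses today.
func formatEntry(sep string, fields ...string) string {
	if sep == "" {
		sep = "\t" // current default
	}
	return strings.Join(fields, sep)
}

func main() {
	fmt.Println(formatEntry(" | ", "2020-04-27", "INFO", "engine started"))
}
```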
What is the motivation / use case for changing the behavior?

Additional information you deem important (e.g. I need this tomorrow):

Improve secrets management

TL;DR: The current way of managing secrets should be improved.

Current situation

Currently, the app descriptor (flogo.json) is pre-processed during engine startup to replace all occurrences of "SECRET:xxx" by the decrypted value of xxx based on AES-256 with a secret decryption key provided by an environment variable FLOGO_DATA_SECRET_KEY.

For reference, see this line and the ones following it:

re := regexp.MustCompile("SECRET:[^\\\\\"]*")

The default value of the FLOGO_DATA_SECRET_KEY encryption key is flogo. If this default value is not changed, all secret values can be decrypted easily, leaving the user with an illusion of security.

Moreover, where will the secret values be encoded?

  • If it's in the Flogo UI, it might use the default value flogo as the encryption key. It could certainly be changed, but that would complicate the configuration of the Flogo UI.
  • If it's done manually, it would require tooling or tedious documentation.

Finally, this pre-processing of the app descriptor, triggered by

updated, err := secret.PreProcessConfig(jsonBytes)

is in conflict with properties finalization, triggered by

option := app.FinalizeProperties(enableExternalPropResolution, secret.PropertyProcessor)

In particular, the secret.PropertyProcessor function at

func PropertyProcessor(properties map[string]interface{}) error {

will not replace anything, since properties with values prefixed by "SECRET:" have already been replaced.
That's a pity because the property.PostProcessor approach seems the best.

Proposal

1. Simplify properties post-processing implementation

See PR #13.

With this simplification it will be easier to add new properties post-processors to manage secrets values.

2. Remove JSON pre-processing for "SECRET:*" values

See PR #14.

As said above, this pre-processing conflicts with the current property.PostProcessor post-processor.

3. Make the secrets "type-safe"

Considering the following properties array:

  "properties": [
    {
      "name": "backend.url",
      "type": "string",
      "value": "http://xxx"
    },
    {
      "name": "backend.username",
      "type": "string",
      "value": "user"
    },
    {
      "name": "backend.password",
      "type": "string",
      "value": "SECRET:xjOqSXAFanom8Y8NW5MCJg=="
    }
  ]

We could replace it by:

  "properties": [
    {
      "name": "backend.url",
      "type": "string",
      "value": "http://xxx"
    },
    {
      "name": "backend.username",
      "type": "string",
      "value": "user"
    },
    {
      "name": "backend.password",
      "type": "secret",
      "value": "xjOqSXAFanom8Y8NW5MCJg=="
    }
  ]

4. Create (external) secret resolvers

Following the model of external property resolvers, we could implement external secret resolvers.
In particular, the two kinds of resolvers would match the properties to process based on the type defined in step 3.

Smart Engine - Ability to pause/resume triggers to control incoming flow of events

Current behavior:
Today, the engine simply accepts as many events as possible (Direct Runner) or blocks trigger goroutines when the worker queue is full (Pooled Runner). This can lead to performance issues or even loss of events due to resource constraints like CPU and memory, or blocked workers (Pooled Runner). It might also crash the engine.
Expected behavior:
We need policy-based flow controllers that can control incoming flow by temporarily pausing triggers and resuming them once a low-water threshold is reached.

What is the motivation / use case for changing the behavior?
This is useful in low-resource environments like IoT.
Additional information you deem important (e.g. I need this tomorrow):

Controlling data-access behavior

Current behavior:
Today, getting a value from a non-existent field in JSON data throws a 'path not found' error.
Expected behavior:

It's better to expose an ENV var FLOGO_RESOLVING to control the behavior
FLOGO_RESOLVING=STRICT => keep current behavior to throw an error
FLOGO_RESOLVING=RELAXED => print a warn log and skip

What is the motivation / use case for changing the behavior?

Today there is no way to check whether a field exists in the JSON,
such as: $.result.data != nil ? "Data not empty" : "Data field is empty"
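A sketch of the proposed STRICT/RELAXED behavior; resolveField is hypothetical and only illustrates the two modes that FLOGO_RESOLVING would select between:

```go
package main

import "fmt"

// resolveField looks up a field in JSON-like data. In STRICT mode a missing
// field is an error (current behavior); in RELAXED mode it logs a warning and
// yields nil, so expressions like `$.result.data != nil` become possible.
func resolveField(data map[string]interface{}, field, mode string) (interface{}, error) {
	v, ok := data[field]
	if !ok {
		if mode == "STRICT" {
			return nil, fmt.Errorf("path not found: %s", field)
		}
		fmt.Printf("warn: path not found: %s, skipping\n", field)
		return nil, nil
	}
	return v, nil
}

func main() {
	data := map[string]interface{}{"name": "flogo"}
	_, err := resolveField(data, "missing", "STRICT")
	fmt.Println(err) // path not found: missing
	v, err := resolveField(data, "missing", "RELAXED")
	fmt.Println(v, err) // <nil> <nil>
}
```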

Additional information you deem important (e.g. I need this tomorrow):

Regression when handling an empty array in array mapping

Current behavior (how does the issue manifest):
Using the mapping below, the result is null rather than []:

{
  "field": {
    "mapping": {
      "array": []
    }
  }
}

Now returns

{
  "field": {
    "array": null
  }
}

Expected behavior:
returns

{
  "field": {
    "array": []
  }
}

Minimal steps to reproduce the problem (not required if feature enhancement):

Please tell us about your environment (Operating system, docker version, browser & web ui version, etc):

Flogo version (CLI & contrib/lib. If unknown, leave empty or state unknown): 0.X.X

Additional information you deem important (e.g. issue happens only occasionally):

External Service Registration/Discovery

Current behavior:

Expected behavior:

Add a standard way to register a trigger as a service in an external registry, and to look one up from an activity, for example.

Add support for a pluggable external service registration/discovery implementation.

What is the motivation / use case for changing the behavior?

Additional information you deem important (e.g. I need this tomorrow):

validation on trigger side

Current behavior:
Today, validation is already enabled on the activity side but not on the trigger side.
Expected behavior:
Enable schema validation on the trigger side as well.

What is the motivation / use case for changing the behavior?
Take the REST trigger as an example: in some cases I need to validate the body before passing it to the action, to make sure the body is as expected.
Additional information you deem important (e.g. I need this tomorrow):

Configuration Validation

Current behavior:
No explicit configuration validation via json schema

Expected behavior:

Generate a JSON schema for the flogo.json configuration and all resources. A project should have the ability to enable/disable configuration validation. If validation is disabled, schema-validation-related imports should be excluded; the engine shouldn't include validation code that is not being used.

What is the motivation / use case for changing the behavior?

Additional information you deem important (e.g. I need this tomorrow):

We should enable configuration schema validation code loading by convention

  • if enabled add import for flogo.json validation

  • if enabled add import for each action (for their resources)
    (ex. /config/validation)

Not able to put any resource after the engine is initialized

Current behavior (how does the issue manifest):
Not able to put any resource after the engine has been initialized.
Expected behavior:
We should expose an API from the resource manager so that we can set resources in the flow tester; we want to use the existing manager to manage all flow resources.

Problem with Installation

When I run:
go get github.com/project-flogo/core/...

I get the following:

# github.com/project-flogo/core/api
...go\src\github.com\project-flogo\core\api\support.go:236:24: cannot use ac (type *activityContext) as type activity.Context in argument to act.Eval:
        *activityContext does not implement activity.Context (missing GetTracingContext method)
...go\src\github.com\project-flogo\core\support\test\trigger.go:33:21: not enough arguments in call to tConfig.FixUp
        have (*trigger.Metadata)
        want (*trigger.Metadata, resolve.CompositeResolver)
# github.com/project-flogo/core/examples/engine/shim
runtime.main_main·f: function main is undeclared in the main package

I am running 64-bit Windows Home.

Identifying overloaded engine when pooled runner is configured

Current behavior:
For the pooled runner type, the engine sets the queue size based on FLOGO_RUNNER_QUEUE. Today, there is no way to find out that the queue is full and all workers are busy. This can happen when a trigger receives events faster than actions can handle them, so the queue fills up. After that, trigger event goroutines block on the queue channel until workers complete existing actions and pick up new ones.
Expected behavior:
At least an API that can monitor the current size of the queue and possibly generate alerts about the situation.
What is the motivation / use case for changing the behavior?
This can be leveraged for a readiness probe in K8s, which would reduce incoming traffic to a given engine until adequate queue capacity is available for further processing.
Additional information you deem important (e.g. I need this tomorrow):

Contributions and go.mod

Should we prescribe that each contribution have its own go.mod? For example, in contrib, should we have separate go.mods instead of one top-level one? What are the advantages/disadvantages of each approach?

One thing about requiring individual ones: coding for a contrib can be more straightforward. For example, as part of an activity tutorial, we can tell people to always create a go.mod, instead of giving a bunch of ambiguous information about the proper placement of their go.mod depending on their repo structure. We can also create a "gen" CLI plugin that scaffolds a contribution, creating its go.mod as well.

I just want to get everyone's thoughts and, ideally, consensus. I guess there are a couple of questions:

  1. Do individual go.mods make sense?
  2. In our current repos with multiple contribs, should we create individual go.mods?
  3. In a contrib creation tutorial or gen tool, should we always create a go.mod for the individual contrib?
  4. If we think this is the way to go, should this be recommended or required behavior?

Thoughts? @mellistibco @pointlander @debovema @vijaynalawade @skothari-tibco

Enhancements for function descriptors

Current behavior:

  1. The current structure has no way to specify the valueType for an argument or return type when it is a complex type like an array or object

  2. There is no attribute to add an example

  3. There is no way to mention the required vs optional arguments

  4. There is no way to provide the default value for an argument

Expected behavior:
Support valueType when the type is complex, like an array or object, and add an example at the root level:

{
            "name": "create",
            "description": "Create an array of *primitive types* from the given items. All items must be same primitive types e.g. string, integer etc and must match with the field type where this function is used.",
            "example": "array.create(\"A\",\"B\",\"C\")\nReturns\n[\"A\",\"B\",\"C\"]",
            "varArgs": true,
            "args": [
                {
                    "name": "item1",
                    "type": "any"
                },
                {
                    "name": "item2",
                    "type": "any"
                }
            ],
            "return": {
                "type": "array",
                "valueType": "any"
            }
 }

Add a required attribute to args to support required vs optional arguments:

"args": [
                {
                    "name": "input",
                    "required": false,
                    "type": "array",
                    "valueType": "any"
                }
 ]

Add a value under the arg to supply a default value:

"args": [
                {
                    "name": "input",
                    "type": "any"
                },
                {
                    "name": "precision",
                    "type": "number",
                    "required": false,
                    "value": 16
                }
]

What is the motivation / use case for changing the behavior?

Additional information you deem important (e.g. I need this tomorrow):

support function in source array and filter for array mapping

Current behavior (how does the issue manifest):
Today the array foreach mapping format is as below:

@foreach($.authors, authorLoop, $loop.age > 45)
  1. The first argument is the source array.
  2. The second argument is the scope name.
  3. The third argument is a filter expression.

Currently, functions are not supported in the first and third arguments.
Expected behavior:

We should support functions in the source array and filter arguments.

Minimal steps to reproduce the problem (not required if feature enhancement):

Please tell us about your environment (Operating system, docker version, browser & web ui version, etc):

Flogo version (CLI & contrib/lib. If unknown, leave empty or state unknown): 0.X.X

Additional information you deem important (e.g. issue happens only occasionally):

Trigger and Activity Close method in interface

Current behavior:

There is no Close method in the Activity or Trigger interface. Developers have no way to clean up backend service connections before the microservice is terminated.

Expected behavior:

What is the motivation / use case for changing the behavior?

Most backend services expect Go developers to implement a Close method and use a defer statement to make sure all connections are closed before the service ends.
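A sketch of the requested lifecycle hook, under the assumption that the engine would call Close on graceful shutdown; none of these types exist in flogo/core today:

```go
package main

import "fmt"

// fakeConn stands in for a backend connection; in a real contribution this
// would be e.g. a *sql.DB or a messaging client.
type fakeConn struct{ closed bool }

func (c *fakeConn) Close() error { c.closed = true; return nil }

// dbActivity sketches a hypothetical activity holding a connection that a
// Close hook on the Activity interface would release.
type dbActivity struct{ conn *fakeConn }

func (a *dbActivity) Close() error { return a.conn.Close() }

func main() {
	a := &dbActivity{conn: &fakeConn{}}
	defer a.Close() // the engine would invoke this on graceful shutdown
	fmt.Println("activity running, conn closed:", a.conn.closed)
}
```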

Additional information you deem important (e.g. I need this tomorrow):

GetRef() in support package should remove 'vendor' path

Current behavior (how does the issue manifest):
When working with older third-party packages that use /vendor directories, GetRef() returns a ref whose path includes '.../vendor/'.

Expected behavior:
To make it work, GetRef() should return only the ref name, excluding the prefix up to '/vendor/'.
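A sketch of the requested trimming; trimVendor is a hypothetical helper illustrating the expected behavior, not the actual GetRef() implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// trimVendor strips everything up to and including the last "/vendor/"
// segment, so a vendored import path resolves to the canonical ref.
func trimVendor(ref string) string {
	const marker = "/vendor/"
	if i := strings.LastIndex(ref, marker); i >= 0 {
		return ref[i+len(marker):]
	}
	return ref
}

func main() {
	fmt.Println(trimVendor("some/old/pkg/vendor/github.com/project-flogo/core/activity"))
}
```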

Minimal steps to reproduce the problem (not required if feature enhancement):

Please tell us about your environment (Operating system, docker version, browser & web ui version, etc):
OSX

Flogo version (CLI & contrib/lib. If unknown, leave empty or state unknown): 1.1.0

Additional information you deem important (e.g. issue happens only occasionally):

Issue running API example

Trying to run the API example (https://github.com/project-flogo/examples/blob/master/api/activities/main.go) and getting the following error when the request comes in:

2018/11/02 17:10:47 http: panic serving [::1]:51612: runtime error: invalid memory address or nil pointer dereference
goroutine 34 [running]:
net/http.(*conn).serve.func1(0xc0001a8000)
	C:/go/src/net/http/server.go:1746 +0x125
panic(0x7d2060, 0xacc7b0)
	C:/go/src/runtime/panic.go:513 +0x1f4
github.com/project-flogo/core/trigger.(*handlerImpl).Handle(0xc000073320, 0x8a2120, 0xc00005e080, 0x7d3940, 0xc000073950, 0x0, 0x0, 0x0)
	C:/Users/melli/go/src/github.com/project-flogo/core/trigger/handler.go:152 +0x564
github.com/project-flogo/contrib/trigger/rest.newActionHandler.func1(0x8a1f60, 0xc0001ac000, 0xc0000f2100, 0xc000058ec0, 0x1, 0x1)
	C:/Users/melli/go/src/github.com/project-flogo/contrib/trigger/rest/trigger.go:190 +0xf08
github.com/julienschmidt/httprouter.(*Router).ServeHTTP(0xc00005a480, 0x8a1f60, 0xc0001ac000, 0xc0000f2100)
	C:/Users/melli/go/src/github.com/julienschmidt/httprouter/router.go:334 +0x269
github.com/project-flogo/contrib/trigger/rest.(*serverHandler).ServeHTTP(0xc0000736b0, 0x8a1f60, 0xc0001ac000, 0xc0000f2100)
	C:/Users/melli/go/src/github.com/project-flogo/contrib/trigger/rest/server.go:161 +0x103
net/http.serverHandler.ServeHTTP(0xc0000631e0, 0x8a1f60, 0xc0001ac000, 0xc0000f2100)
	C:/go/src/net/http/server.go:2741 +0x1f0
net/http.(*conn).serve(0xc0001a8000, 0x8a20e0, 0xc00005a680)
	C:/go/src/net/http/server.go:1847 +0x114d
created by net/http.(*Server).Serve
	C:/go/src/net/http/server.go:2851 +0x7b7
Process exiting with code: 0

coerce.ToString adds quotes when coercing a datetime type to string

Current behavior (how does the issue manifest):
Using coerce.ToString to convert a time.Time to a string adds double quotes to the result:
"2020-04-27T18:08:44.62929Z"
Expected behavior:
Without double-quotes. 2020-04-27T18:08:44.62929Z
Minimal steps to reproduce the problem (not required if feature enhancement):

Please tell us about your environment (Operating system, docker version, browser & web ui version, etc):

Flogo version (CLI & contrib/lib. If unknown, leave empty or state unknown): 0.X.X

Additional information you deem important (e.g. issue happens only occasionally):

Unclear error message thrown when a connection has not been imported

Current behavior (how does the issue manifest):

Currently, the error message below is thrown when a connection has not been imported into the engine:

 connection ' ' not imported

It is unclear and says nothing about what is wrong.

Expected behavior:

We should throw a meaningful error message, such as:

connection 'PulsarConn' with ref 'github.com/project-flogo/messaging-contrib/pulsar/connection' not imported
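A minimal sketch of constructing the richer message, assuming both the connection name and its ref are available at the point of failure (the `connError` helper is hypothetical, not the actual core implementation):

```go
package main

import "fmt"

// connError builds the proposed error: include both the connection
// name and its ref so the user can tell exactly what is missing.
func connError(name, ref string) error {
	return fmt.Errorf("connection '%s' with ref '%s' not imported", name, ref)
}

func main() {
	err := connError("PulsarConn", "github.com/project-flogo/messaging-contrib/pulsar/connection")
	fmt.Println(err)
	// connection 'PulsarConn' with ref 'github.com/project-flogo/messaging-contrib/pulsar/connection' not imported
}
```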

Minimal steps to reproduce the problem (not required if feature enhancement):

Please tell us about your environment (Operating system, docker version, browser & web ui version, etc):

Flogo version (CLI & contrib/lib. If unknown, leave empty or state unknown): 0.X.X

Additional information you deem important (e.g. issue happens only occasionally):

flogo core install not working

Current behavior (how does the issue manifest):

flogo core install not working

Expected behavior:

Flogo core should install

Minimal steps to reproduce the problem (not required if feature enhancement):

git clone https://github.com/project-flogo/core.git
Cloning into 'core'...
remote: Enumerating objects: 42, done.
remote: Counting objects: 100% (42/42), done.
remote: Compressing objects: 100% (29/29), done.
remote: Total 2179 (delta 25), reused 27 (delta 13), pack-reused 2137
Receiving objects: 100% (2179/2179), 544.27 KiB | 1.44 MiB/s, done.
Resolving deltas: 100% (1384/1384), done.

cd core/
go install ./...

go: finding github.com/xeipuuv/gojsonreference latest
go: downloading github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415
go: extracting github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415
go: finding github.com/xeipuuv/gojsonpointer latest
go: downloading github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f
go: extracting github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f
github.com/project-flogo/core/examples/engine/shim
runtime.main_main·f: function main is undeclared in the main package

Please tell us about your environment (Operating system, docker version, browser & web ui version, etc):

ubuntu 18.04.2

Flogo version (CLI & contrib/lib. If unknown, leave empty or state unknown): 0.X.X

flogo --version
flogo cli version v0.9.0

Additional information you deem important (e.g. issue happens only occasionally):

none

support array mapping

Current behavior:
Now we do not have array mapping support.
Expected behavior:
Support array mapping by iterating over a source array

We have a first version after discussing with @mellistibco @fm-tibco @vijaynalawade:

"input" {
    
    "val" : {
        "a" : "=$activity[blah].out",
        "addresses": {
            "@foreach[$activity[blah].out2]":
            {
              "street"  : "=$.street",
              "zipcode" : "=$.zipcode",
              "state"   : "=$.state"
            }
        }
    }
}

addresses is an array node, which iterates over the source array $activity[blah].out2.
The mapping "street" : "=$.street" indicates that we get the street value from the source array $activity[blah].out2 and set it on the target addresses.street; the same applies to zipcode and state.

Note that the target array field values might also come from literals or an upstream activity's output:

"input" {
    
    "val" : {
        "a" : "=$activity[blah].out",
        "addresses": {
            "@foreach[$activity[blah].out2]":
            {
              "street"  : "=$.street",
              "zipcode" : "9999",
              "state"   : "=$activity[test].state"
            }
        }
    }
}

Unable to resolve iterator's $current in an expression

Current behavior (how does the issue manifest):
When I try to run the binary generated by following flogo.json:

{
  "name": "SampleApp",
  "type": "flogo:app",
  "version": "0.0.1",
  "appModel": "1.0.0",
  "description": "",
  "imports": [
    "github.com/project-flogo/flow",
    "github.com/project-flogo/contrib/trigger/rest",
    "github.com/project-flogo/contrib/activity/log",
    "github.com/project-flogo/contrib/function/string"
  ],
  "triggers": [
    {
      "id": "receive_http_message",
      "type": "rest",
      "name": "Receive HTTP Message",
      "description": "Simple REST Trigger",
      "settings": {
        "port": 9009
      },
      "handlers": [
        {
          "action": {
            "type": "flow",
            "data": {
              "flowURI": "res://flow:test"
            },
            "input": {},
            "output": {}
          },
          "settings": {
            "method": "GET",
            "path": "/testing"
          }
        }
      ]
    }
  ],
  "resources": [
    {
      "id": "flow:test",
      "data": {
        "name": "test",
        "description": "A sample flow",
        "tasks": [
          {
            "id": "log_2",
            "type": "iterator",
            "name": "Log",
            "description": "Logs a message",
            "settings": {
              "iterate": 20
            },
            "activity": {
              "type": "log",
              "input": {
                "addDetails": true,
                "message": "=string.concat(\"testing!! count: \", $current.iteration.key)"
              }
            }
          }
        ]
      }
    }
  ]
}

I get the following error:

path_to_workspace\flogo-experiment>flogo create -f iterator-test.json
Creating Flogo App: SampleApp
installing core
Installed trigger:   github.com/project-flogo/contrib/trigger/rest
Installed activity:  github.com/project-flogo/contrib/activity/log
Installed function:  github.com/project-flogo/contrib/function/string
Installed action:    github.com/project-flogo/flow

path_to_workspace\flogo-experiment>cd SampleApp

path_to_workspace\flogo-experiment\SampleApp>flogo build

path_to_workspace\flogo-experiment\SampleApp>cd bin

path_to_workspace\flogo-experiment\SampleApp\bin>SampleApp.exe
1550646373777602000     ERROR   [flogo] -       Failed to create engine: error unmarshalling flow: unable to compile expression 'string.concat("testing!! count: ", $current.iteration.key)': unable to find a 'current' resolver
github.com/project-flogo/core/support/log.(*zapLoggerImpl).Errorf
        %GOPATH%/pkg/mod/github.com/project-flogo/[email protected]/support/log/zap.go:65
main.main
        %GOPATH%/flogo-experiment/SampleApp/src/main.go:43
runtime.main
        C:/Go/src/runtime/proc.go:201

Expected behavior:
I should be able to access the current iteration context

Minimal steps to reproduce the problem (not required if feature enhancement):

  1. Create a flogo project from the above application JSON
  2. Build the application's binary using flogo build
  3. Try to execute the binary in the bin folder

Please tell us about your environment (Operating system, docker version, browser & web ui version, etc):
OS: Windows 7

Flogo version (CLI & contrib/lib. If unknown, leave empty or state unknown): 0.9.0-alpha4

Failed to assign literal value array using object mapping

Current behavior (how does the issue manifest):
Assume the source data is books, and each book has more than one author; authors is a string array. Now we want to assign this string array to another string array using the mapping below:

{
  "mapping": {
    "books": {
      "authors": {
        "@foreach($.books[0].authors, index)": {
          "=": "$loop"
        }
      }
    }
  }
}

Data:

{
  "books": [
    {
      "title": "Android",
      "isbn": "1933988673",
      "pageCount": 416,
      "publishedDate": {
        "$date": "2009-04-01T00:00:00.000-0700"
      },
      "status": "PUBLISH",
      "authors": [
        "W. Frank Ableson",
        "Charlie Collins",
        "Robi Sen"
      ],
      "categories": [
        "Open Source",
        "Mobile"
      ]
    }
  ]
}

Today it fails with an error saying it cannot convert string to map.
Expected behavior:
It should be able to handle this, which is a case of assigning all of the data.
Minimal steps to reproduce the problem (not required if feature enhancement):

Please tell us about your environment (Operating system, docker version, browser & web ui version, etc):

Flogo version (CLI & contrib/lib. If unknown, leave empty or state unknown): 0.X.X

Additional information you deem important (e.g. issue happens only occasionally):

Errors observed - Load test - Flogo App (Rest Trigger to Rest Activity)

Performed a load test on a Flogo app with a simple REST trigger and REST activity.
Below are the steps:
1) Install go-wrk:
go get github.com/tsliwowicz/go-wrk
2) Create the app with latest.json (project-flogo) and run the app (file attached).
3) Run the local backend node1.js server (a node server is used to reduce network latency):
node node1.js
4) Now run the command below to hit the gateway with 5 concurrent clients:
go-wrk -c 5 -d 120 http://localhost:9096/test

The below errors are seen:
dial tcp 127.0.0.1:1234 : connect: cannot assign requested address

Also, when using the old TIBCOSoftware/flogo-contrib flow and activities, no error is seen even with 50 concurrent clients. Repeat the above steps with oldFlogo.json:
go-wrk -c 50 -d 120 http://localhost:9096/test

Attached the latest.json, node1.json and oldFlogo.json.
attachments.zip

Note: We have already raised this issue in the CLI repo; on advice, we are raising it in core.

Issue handling multiple mappings to create a single input value

Current behavior (how does the issue manifest):
Currently the code only handles the mapping field mapTo as a single string, but we have a requirement to support complex mappings for COMPLEX_OBJECT fields.
This is mostly for legacy support. Attached flogo json: flogo.json.zip

Steps:

  1. Define flowout as a complex_object field that expects a JSON object
  2. Use the return activity to map all required fields
  3. Use multiple mappings to create a single input value:
              "input": {
                "mappings": [
                  {
                    "mapTo": "flowout.id",
                    "type": "literal",
                    "value": "this is id"
                  },
                  {
                    "mapTo": "flowout.name",
                    "type": "literal",
                    "value": "this is name"
                  }
                ]
              }

Expected behavior:
We should be able to handle multiple mappings that create a single input value.
Minimal steps to reproduce the problem (not required if feature enhancement):

Please tell us about your environment (Operating system, docker version, browser & web ui version, etc):

Flogo version (CLI & contrib/lib. If unknown, leave empty or state unknown): 0.X.X

Additional information you deem important (e.g. issue happens only occasionally):

support mapping ref on array indexer

Current behavior:
Today, in order to get a specific array element, we have to use literal indexes 0, 1, 2, 3..., such as $activity[rest].result[1].xxx. It would be very useful to support a mapping ref as an array indexer.
Expected behavior:
Support a mapping ref as an array indexer, e.g. $activity[rest].result[$iteration[key]].xxx
What is the motivation / use case for changing the behavior?
It is useful to access a specific array element based on the iterator key (an int) when iterating an activity/flow over an array.
Additional information you deem important (e.g. I need this tomorrow):

Provide a way to omit the value for an optional field from mapping

Current behavior:

Today there is no way to omit the value for specific optional fields. We have the env var FLOGO_MAPPING_SKIP_MISSING to skip missing fields, but that applies to the whole engine; we cannot control it for individual fields.

Expected behavior:

Users should be able to omit the field value or keep the value NULL when the mapper value is empty.

What is the motivation / use case for changing the behavior?

Additional information you deem important (e.g. I need this tomorrow):

Filter on array mapping

Current behavior:
Today we do array mapping with:
@foreach(SourceArray, loopName)
Expected behavior:
It would be very useful to support array mapping based on a filter: @foreach(SourceArray, loopName, filter).
For example, loop over the array only when a certain condition is met:

@foreach($.result.books, "bookLoop", "$loop.author == 'James'")

  1. $.result.books => source array
  2. bookLoop => loop name
  3. $loop.author == 'James' => filter; iterate over a book only when the book's author equals 'James'

What is the motivation / use case for changing the behavior?

Additional information you deem important (e.g. I need this tomorrow):

bool expression || and &&

Current behavior (how does the issue manifest):

Here are 2 examples regarding && and ||

  • $activity[GetByCompositeKey].code == 200 && $activity[GetByCompositeKey].result[0].key != $flow.agreement.AgreementID
  • $activity[GetByCompositeKey].code == 400 || $activity[GetByCompositeKey].code == 400

Today we evaluate both the left and right sides, but we don't need to:

  • If the left expression is false, we don't need to evaluate the right expression for &&
  • If the left expression is true, we don't need to evaluate the right expression for ||

Expected behavior:

  • If the left expression is false, the right expression is not evaluated for &&
  • If the left expression is true, the right expression is not evaluated for ||
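The requested behavior matches Go's own operators, which short-circuit; a minimal sketch showing that with a false left operand && never evaluates the right side, and with a true left operand || skips it as well (the `rightSide` counter is illustrative):

```go
package main

import "fmt"

// evalCount tracks how often the right-hand side is evaluated.
var evalCount int

func rightSide() bool {
	evalCount++
	return true
}

func main() {
	// Go short-circuits &&: a false left operand means the right
	// side is never called. The expression evaluator should mirror this.
	_ = false && rightSide()
	fmt.Println(evalCount) // 0

	// || likewise skips the right side when the left is true.
	_ = true || rightSide()
	fmt.Println(evalCount) // 0
}
```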

Minimal steps to reproduce the problem (not required if feature enhancement):

Please tell us about your environment (Operating system, docker version, browser & web ui version, etc):

Flogo version (CLI & contrib/lib. If unknown, leave empty or state unknown): 0.X.X

Additional information you deem important (e.g. issue happens only occasionally):

The error message is not clear to identify the issue when creating or starting trigger

Current behavior (how does the issue manifest):
The error message is not clear enough to identify the issue when creating or starting triggers, especially when there are multiple triggers in the app.
Expected behavior:

Make the error message more readable for the user to identify the issue.

Minimal steps to reproduce the problem (not required if feature enhancement):

Please tell us about your environment (Operating system, docker version, browser & web ui version, etc):

Flogo version (CLI & contrib/lib. If unknown, leave empty or state unknown): 0.X.X

Additional information you deem important (e.g. issue happens only occasionally):

Supporting index in array mapper

Current behavior:

Today's array mapper does not have a way to let users use the looping index.

Expected behavior:

We should expose the loop index, giving the user the ability to know the current index of the loop.

What is the motivation / use case for changing the behavior?

Today there is no way to map a primitive array to an object array.
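One possible shape for this, sketched with a hypothetical $index resolver (the name is an illustrative assumption, not a committed design; "$loop" follows the current-element usage shown in the other @foreach examples in these issues), mapping the primitive authors array to an object array:

```json
{
  "mapping": {
    "authorRecords": {
      "@foreach($.books[0].authors, authorLoop)": {
        "position": "=$index",
        "name": "=$loop"
      }
    }
  }
}
```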

Additional information you deem important (e.g. I need this tomorrow):

Explicit Schema Support

Current behavior:

Expected behavior:

Ability to associate schemas with values. Ability to validate values using those schemas

What is the motivation / use case for changing the behavior?

Additional information you deem important (e.g. I need this tomorrow):
Example Activity Config:

  {
    "id": "activity_3",
    "name": "Activity 3",
    "activity": {
      "type": "act",
      "input": {
        "value1": "=$activity[a1].value1"
        "value2": "=$activity[a1].value2"
      },
      "schemas":{
        "input": {
          "value1":"schema://s1",
          "value2": {"type":"json", "value":"blah.. blah.. blah.."}
        }
      }
    }
  }

Example Schema Section Config:

"schemas":
    [
      {
        "id": "s1",
        "type":"json",
        "value":"blah.. blah.. blah.."
      }
    ]

Handle string pointer to string conversion in the ToString method

Current behavior (how does the issue manifest):
Double quotes are added after converting a string pointer to a string.
Expected behavior:
Just the string, without quotes.
Minimal steps to reproduce the problem (not required if feature enhancement):

Please tell us about your environment (Operating system, docker version, browser & web ui version, etc):

Flogo version (CLI & contrib/lib. If unknown, leave empty or state unknown): 0.X.X

Additional information you deem important (e.g. issue happens only occasionally):
