
event-gateway's People

Contributors

allcontributors[bot], asyncapi-bot, codingtenshi, derberg, doc-jones, khudadad414, smoya


event-gateway's Issues

Websocket server should consume from a queue so all instances broadcast the same events

The current application exposes a websocket server that broadcasts message validation errors.
It does so using Go channels, meaning only validation errors that happened on that particular machine are broadcast later.

The issue arises whenever you have more than one instance of the app running (e.g. behind a load balancer).
Example: the client connects to the websocket server of machine A, but the message validation error happens on machine B.

Add an option to publish validation errors to another queue (a Kafka topic in this first implementation) so they can be consumed by all instances at the same time.

Deploy a demo service enabling a Kafka proxy to a demo Kafka cluster

Deploying a demo application of the Event-Gateway to the cloud will help us test the Kafka proxy by letting people play around with it and get an idea of what it is.

Think of a demo service where people can use their own Kafka clients to produce demo messages and have them validated.
A 1-node Kafka cluster + the proxy app would be more than enough; we don't care about HA etc. at this moment.

A WS endpoint should be provided where users can connect and see all validation errors.

Validation errors should be published into a new Kafka topic so all proxy instances can consume those and broadcast to websocket clients.

[USECASE] AMQP-0.9

Hey guys, there doesn't seem to be that much progress in the event gateway repository. As we are using RabbitMQ/AmazonMQ/AMQP 0-9-1 in our company, I would like to know whether there is any plan to make the event gateway able to handle that as well?

Best regards,
Benjamin

[kafka] Match messages received on Produce requests with the described messages in the AsyncAPI doc

From the Kafka documentation, this is all the info in a Produce request (taking version 8 as an example):

Produce Request (Version: 8) => transactional_id acks timeout_ms [topic_data] 
  transactional_id => NULLABLE_STRING
  acks => INT16
  timeout_ms => INT32
  topic_data => name [partition_data] 
    name => STRING
    partition_data => index records 
      index => INT32
      records => RECORDS

The app also knows the broker address the request was made to, and it has the mappings between brokers and local ports.

At a glance, I see the app could find the channel by checking the topic name in the Produce request and matching it against channels in the AsyncAPI doc that have an operation of type subscribe.

However, this solution won't scale well, since we are assuming operations and messages have a 1:1 relationship, but this is subject to change in version 3 of the spec.
Would it make sense to introduce some identifier in the message so we can validate directly by looking up that identifier?

Drop support for configuring Kafka broker mappings through env vars.

Support for configuring Kafka brokers through env vars was added at a very early development stage so it was easy to test and spin up an instance of the app.

The complexity of the code, and the fact that maintaining both config sources (env vars + AsyncAPI doc) for Kafka broker mappings makes them harder to document and forces logic to be replicated, led us to decide on the AsyncAPI doc as the only source of configuration (with some exceptions).

[kafka] Modify Kafka ApiVersions response to only include supported versions of modified requests

Supporting all Kafka protocol request/response versions is a really big commitment. We can eventually achieve it, but we can't ensure parity with Kafka releases.

In order to ensure Kafka clients can talk to our proxy, we should intercept the ApiVersions response so that we indicate the supported versions for all the responses we modify (as of today, FindCoordinator and Metadata).

Otherwise, clients will potentially get a compatibility error from our proxy that looks like: Unsupported response schema version 11 for key 3

[UseCase]: NATS JetStream

Please provide the name of your company and/or project.

I checked but found no existing issues for this.

Basically it would be good if we can use NATS JetStream as well as Kafka.

The NATS API is pretty small now: https://github.com/nats-io/nats.go/tree/main/jetstream

This came out about a month ago and cleans up the API into a best practices pattern with a small consistent API.

I have not looked too deeply at the code in https://github.com/asyncapi/event-gateway to see how easy this would be yet.

Please describe your project.

It's an open-source science platform. We use NATS and want to use AsyncAPI so that the behaviours of the GUI and the backend can be changed at runtime by users themselves.

We found the following challenges or difficulties...

We like NATS more than Kafka because it's easier to deploy.

You can also get a free NATS server at www.synadia.com if you don't want to run your own global NATS server cluster.
Running a global NATS cluster is not hard, though.

https://www.synadia.com/control-plane

I don't know how hard it is to run a fault-tolerant Kafka that spans many regions. So maybe NATS is useful here for that reason too.

Overall, we think Event-Gateway solving this is...

  • A 'nice to have'.
  • A must for us and decisive on using the Event Gateway.

Is there anything else you wish to share with us?

No response

Need for urgent changes in GitHub Actions automation

This issue defines a list of tasks that need to be performed in this repo to make sure its CI/CD automation works long term without any issues.

It is up to maintainers to decide if it must be addressed in one or multiple PRs.

Below are 3 different sections describing 3 different important ci/cd changes.

IMPORTANT-START
For GitHub workflows that contain "This workflow is centrally managed in https://github.com/asyncapi/.github/" you do not have to perform any work. These workflows were already updated through the update in .github. The only exception is the workflows related to the nodejs release. More details in the "Upgrade Release pipeline - in case of nodejs projects" section.
IMPORTANT-END

Deprecation of way data is shared between steps

Every single GitHub Actions workflow that contains echo "::set-output name={name}::{value}" needs to be updated to the echo "{name}={value}" >> $GITHUB_OUTPUT form.

We do not yet know when set-output will stop working. The previous disable date was 31.05, but they now say the community needs more time.
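As a minimal before/after illustration of the change in a workflow step (the step id and value here are made up):

```yaml
steps:
  - name: Get version
    id: version
    # Deprecated form:
    # run: echo "::set-output name=version::1.0.0"
    # Replacement form:
    run: echo "version=1.0.0" >> $GITHUB_OUTPUT
```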

For more details read official article from GitHub

Deprecation of node12

The 2nd bullet point is still relevant for you even if your project is not a Node.js project.

  • Every single workflow that uses the setup-node action needs an update to use the v3 version of this action and to make sure that at least Node 14 is used.
  • Now this part is more complex. The problem with node12 is that the majority of node-based GitHub Actions were using it as a runtime environment. Look, for example, at this action.yaml file for the setup-node action v2. So the job you have to do is go through all the workflows and verify every single action that you use, making sure you are using the latest version that is not based on node12. I already reviewed a lot of actions as part of this PR, so maybe you will find some actions there and can copy from me. For example, actions/checkout needs to be updated to v3.

Node12's end of support in actions is probably September 27th.

For more details read official article from GitHub

Upgrade Release pipeline - in case of nodejs projects

Ignore this section if your project is not a Node.js project.

You have 2 options. You can:

A. choose to switch to new release pipeline using instruction from asyncapi/.github#205

B. stay with the old release pipeline and manually update the GitHub workflows and actions used in it; you can take a lot of inspiration from this PR: asyncapi/.github#226

I definitely recommend going with A.

Workflows related to release:

  • .github/workflows/if-nodejs-release.yml
  • .github/workflows/if-nodejs-version-bump.yml
  • .github/workflows/bump.yml

Idea: Using EventGateway Websocket with Grafana

This is just a random thought and idea, and I wonder whether it would be useful or not...

After speaking to @smoya about Event Gateway, I can see its huge potential and the great work going on 🎉

One feature that's interesting to me is the websocket it will expose to tell us real-time information about event data (validation etc.).

I have tinkered with some websocket > influxdb > grafana solutions in the past, and I wonder if it could work here too.

Event Gateway is dockerised at the moment, and you can get up and running with Docker Compose.

Imagine if:

  • There was another (optional) app you could run against it: a Grafana companion
  • You could visualise your events/errors in seconds, with ready-made dashboards out of the box

Why

You could send metadata and information from the events to this application, which would put it into Grafana and allow you to do all sorts of things:

  • Visualise the data in real time using Grafana tools
  • Set up alerting with Grafana, PagerDuty, etc.
  • And much more....

Just a random idea; I'd like to know what people think here 👍

Simplify configuration of the app within the spec file

We want this application to be configured mostly by reading an AsyncAPI file.
However, configuration is currently done on the proxied servers, which forces the user to modify their existing servers (e.g. Kafka servers) by adding a custom extension (x-eventgateway prefix).

Example:

servers:
  asyncapi-kafka-test:
    url: 'asyncapi-kafka-test-asyncapi-8f90.aivencloud.com:20472' # Kafka with 3 brokers.
    protocol: kafka-secure
    description: AsyncAPI Kafka test broker. Private.
    x-eventgateway-dial-mapping: '0.0.0.0:20473,event-gateway-demo.asyncapi.org:20473|0.0.0.0:20474,event-gateway-demo.asyncapi.org:20474|0.0.0.0:20475,event-gateway-demo.asyncapi.org:20475' # Dynamic ports starts at 20473

Instead, I suggest we configure the app by using the servers that describe the app itself.

servers:
  asyncapi-event-gateway-kafka-1:
    url: 'event-gateway-demo.asyncapi.org:28003'
    protocol: kafka
    x-eventgateway:
      proxy:
        kafka:
          server: asyncapi-kafka-test # as this is not a ref, this should be validated as we did in https://github.com/asyncapi/parser-js/pull/364
          dialMapping: '0.0.0.0:20473,event-gateway-demo.asyncapi.org:20473|0.0.0.0:20474,event-gateway-demo.asyncapi.org:20474|0.0.0.0:20475,event-gateway-demo.asyncapi.org:20475'

The benefits are multiple:

  • Existing servers are not changed.
  • Port binding + advertised URL (the URL of the actual application) are already available in the server.
  • The application can act as a proxy for several servers by just creating a new server with a different port. It also supports different protocols. For example:
    servers:
      asyncapi-event-gateway-kafka-2:
        url: 'event-gateway-demo.asyncapi.org:28004' # note this port is different, but real server behind is the same.
        x-eventgateway:
          proxy:
            kafka:
              server: another-different-asyncapi-kafka-test
      asyncapi-event-gateway-mqtt:
        url: 'event-gateway-demo.asyncapi.org:1883'
        x-eventgateway:
          proxy:
            mqtt:
              server: a-mqtt-server

asyncapi 2.4.0 support

Hi Team

Could you please share your plan for when you are going to support AsyncAPI v2.4.0?

Another minor question: how can a user define the name of the error topic if validation fails? Is there some default value, or can that be configured?

thx
tom

Hosting of demo app - propose changes

Yo, do we need to host the demo app? Do we know if anyone is using it?

I suggest we stop hosting it, as I wanted to propose that we stop using Kubernetes on DigitalOcean:

  • we stop deploying event gateway to kubernetes and if someone wants:
    • there can be instruction about using docker compose
    • we can just deploy super minimal demo directly using DO droplets
  • we stop deploying server-api to K8s as well, and just use droplet to host it

It's all about money. The K8s cluster costs $96 a month, and we just ran out of all the funds we had because of this high cost.

We used K8s because we envisioned that we would have many different projects hosted there, so K8s was a good option in case we needed to move away from DO at some point.

The future turned out to be different. Thus, imho, we should move away from K8s.

thoughts?

Of course I will ask DO to give us more budget, but this doesn't mean we should continue doing what we do.

Automate Helm chart release version bump

Helm charts' Chart.yaml file has a field called appVersion, where the version of the application running behind the chart should be specified.

Currently, this value is hardcoded to 0.1.0-alpha, but it would be ideal if we could sync it with the release version when bumping a new release.

Meaning, a new release of this repository would also include a version bump of that specific field.

Also, the tag value in the asyncapi-event-gateway Helm chart should be kept in sync with it instead of staying at latest.
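For illustration, the relevant Chart.yaml fields look roughly like this (the version numbers are placeholders):

```yaml
apiVersion: v2
name: asyncapi-event-gateway
version: 0.2.0          # chart version
appVersion: 0.1.0-alpha # currently hardcoded; should be bumped to match each app release
```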

x-eventgateway-dial-mapping properties: comma string vs object properties

Hey,

I noticed in the examples I have seen that the x-eventgateway-dial-mapping value is a comma-separated string, for example: 0.0.0.0:20473,event-gateway-demo.asyncapi.org:20473.

Coming at this new, I'm not really sure what these values mean. I know they are consumed by the gateway, but maybe it's best if we could be explicit about what they mean?

Proposal:

...
x-eventgateway-dial-mapping:
  proxyPort: 0.0.0.0:20473
  target: event-gateway-demo.asyncapi.org:20473
...

Ignore the property names, but hopefully you get the idea; maybe we could also describe them?

Thoughts @smoya ?

Consider supporting multiple AsyncAPI Docs as config source.

Just to add some context: the app configures the Kafka proxy based on the servers section and some properties (actually extensions) you must configure on the server(s).
To simplify, it basically configures a local port to map to a remote Kafka broker.

The question is: what are the use cases for supporting multiple AsyncAPI files?

As an example, I could imagine use cases where users want to configure the app to act as a Kafka proxy for several Kafka clusters, where those clusters are described in different asyncapi.yaml files.
However, I see some technical complexity here, since the same servers could be declared in several files but with different names, and the same goes for the channels related to those servers.

Is there a concise reason we might want to support configuring the event-gateway from several files instead of just using one?

cc @fmvilas @derberg

Create better Kubernetes liveness, readiness and startup Probes

Source: #71 (comment)

Probes should be based on the ports the app opens. That means that, to be considered up, running and healthy, the application should, at least:

  • Respond on the configured HTTP port. This is a global health check that can also be used to tell K8s the app is not running properly (for example, if we catch some internal errors and need to stop execution).
  • Respond on the configured Websocket port.
  • Respond to a Kafka Metadata request on all opened Kafka ports. This can be done using kcat or, in the worst case, just a simple nc connection test.
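A sketch of what such probes could look like in the chart's pod spec (the path and port numbers here are illustrative, not the app's actual configuration):

```yaml
livenessProbe:
  httpGet:
    path: /health   # hypothetical endpoint on the configured HTTP port
    port: 8080
readinessProbe:
  tcpSocket:
    port: 5000      # the configured Websocket port
startupProbe:
  exec:
    # Fallback connection test against a Kafka listener port; ideally this
    # would be a real Metadata request via kcat instead.
    command: ["nc", "-z", "localhost", "28002"]
```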

circular dependency

Hi Team
In version v0.2.0 I noticed a potential bug.
When using a schema referenced from its own file:

messages:
  Funddata:
    payload:
      $ref: "./schema.json"

I get an error:

time="2022-06-10T11:44:37Z" level=fatal error="error decoding AsyncAPI json doc to Document struct: error parsing AsyncAPI doc: #./schema.json: invalid reference\n#/components/messages/Funddata/payload: circular dependency

But referencing the data in same file works:

asyncapi: 2.4.0
info:
  title: datahub
  version: 2.0.1
  description: Delivering not only funddata
  contact:
    name: datahub
channels:
  edb.c2.datahub.funddata.normalized.input.v1:
    subscribe:
      message:
        $ref: '#/components/messages/Funddata'
      bindings:
        x-six-edb:
          partitions: 30
          replicas: 2
          labels:
            edb.application/edb: assigned
            edb.pipeline/edb.c2.datahub.funddata.normalized.input: assigned
            edb.channel/edb.c2.datahub.funddata.normalized.input: assigned
components:
  x-six-edb:
    roles:
      edb.datahub.publisher:
        - read
        - write
        - delete
      edb.flex.consumer:
        - read
      edb.sixid.consumer:
        - read
  messages:
    Funddata:
      payload:
        {
          "$schema": "http://json-schema.org/draft-07/schema#",
          "title": "Demo data",
          "type": "object",
          "properties": {
            "metadata": {
              "type": "object",
              "properties": {
                "messageId": {
                  "format": "uuid",
                  "type": "string"
                }
              },
              "required": [
                  "messageId"
              ]
            },
            "body": {
              "type": "object",
              "properties": {
                "edb:itemIdentifier": {
                  "type": "object",
                  "properties": {
                    "six:isin": {
                      "type": "string",
                      "pattern": ".*([a-zA-Z]{2}[0-9a-zA-Z]{10}).*"
                    }
                  },
                  "required": [
                      "six:isin"
                  ]
                }
              },
              "required": [
                  "edb:itemIdentifier"
              ]
            }
          },
          "required": [
              "metadata",
              "body"
          ]
        }
servers:
  test:
    url: broker.mybrokers.org:9092
    protocol: kafka
    x-eventgateway-listener: 28002 # optional. 0.0.0.0:9092 will be used instead if missing.
    x-eventgateway-dial-mapping: '0.0.0.0:28002,test.myeventgateway.org:8092' # optional.

Would be nice to be able to keep it separate...
thx

Support configuring Kafka bootstrap servers both from a single server or multiple servers

The current configuration is limited to just one server, which is used as the Kafka bootstrap server. For example:

servers:
    asyncapi-kafka-test:
        url: 'asyncapi-kafka-test-asyncapi-8f90.aivencloud.com:20472'
        protocol: kafka-secure

The event-gateway takes the value of the url field and uses it as the only Kafka bootstrap server, mapping it to the same port but locally, meaning :20472 (this can be changed through an extension). Then it opens a new local port dynamically for each discovered broker (if any).

However, users might want to configure several Kafka bootstrap servers (which is actually recommended).

There are several possibilities:

1. To group servers of the same cluster

There is a RFC0 implementation for this: asyncapi/spec#465.

We could support it from day one via an extension: an x-cluster field or similar. Or rather add it as an official change in the Kafka binding.

2. To add a new extension to the existing server

Support only one server, and add the rest of the bootstrap servers in a new extension like x-extra-bootstrap-servers.

I'm completely leaning towards the first, but open to suggestions.
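A sketch of option 1, using a hypothetical x-cluster extension to group bootstrap servers of the same cluster (server names and URLs are illustrative):

```yaml
servers:
  kafka-bootstrap-1:
    url: 'broker-1.mykafka.example.com:9092'
    protocol: kafka-secure
    x-cluster: my-cluster # hypothetical extension; servers sharing it are bootstrap servers of one cluster
  kafka-bootstrap-2:
    url: 'broker-2.mykafka.example.com:9092'
    protocol: kafka-secure
    x-cluster: my-cluster
```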

[kafka] Configure listener ports per each Kafka broker based on the AsyncAPI file.

The app needs to create listeners that listen for Kafka requests in order to work.

The way I see it, the listeners the app creates should be tied to the number of brokers (servers) the AsyncAPI file contains.
Meaning, there should be a 1:1 relation between a broker and a local listener (port) on the app. For clarification, servers will be considered brokers only if they have the right protocol binding (kafka*).

In that way, we can emulate the behavior of connecting to real brokers and also isolate traffic, ensuring concurrency and avoiding bottlenecks (imagine 10 servers writing to the same port, where lag is increasing due to just one request).

To help configure those listeners and to ensure the app opens the ports the user wants, it makes sense to allow users to set them in the AsyncAPI file. More precisely, as an extension (at least for now) on the server side.
Something like x-eventgateway-listen: ':28002'.
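A sketch of how that could look, with one listener per broker (the broker URLs and ports here are illustrative):

```yaml
servers:
  broker-a:
    url: 'broker-a.mybrokers.org:9092'
    protocol: kafka
    x-eventgateway-listen: ':28002' # local listener dedicated to broker-a
  broker-b:
    url: 'broker-b.mybrokers.org:9092'
    protocol: kafka
    x-eventgateway-listen: ':28003' # 1:1 broker-to-listener relation
```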

Update README with new logo banner

Reason/Context

This is to replace the old AsyncAPI logo in this repo's README with the banner attached below that represents the new branding.

Here are a few guidelines for this change as well:

  1. Make sure you are using Markdown syntax only
  2. Be sure to remove the old logo as well as the old title of the repo as this image will replace both elements
  3. Make sure you link this image to the website: https://www.asyncapi.com
  4. If there is any description text below the repo title, let's make it left-aligned if it isn't already, so as to match the left-alignment of the content in the new banner image

Download the image file:
github-repobanner-eventgateway.png.zip


Banner preview

Please note that this is only a preview of the image; the contributor should download and use the zip file above.

github-repobanner-eventgateway
