
Join the community chat at https://apicurio.zulipchat.com/.

Apicurio Registry

An API/Schema registry - stores and retrieves APIs and Schemas.

Build Configuration

This project supports several build configuration options that affect the produced executables.

By default, mvn clean install produces an executable JAR with the dev Quarkus configuration profile enabled and the in-memory persistence implementation.

Apicurio Registry supports 4 persistence implementations:

  • In-Memory
  • KafkaSQL
  • PostgreSQL
  • SQL Server (community contributed and maintained)

Starting with Apicurio Registry 3.0, we produce a single artifact that can run any storage variant.

Which storage variant will be used is determined by the following configuration:

Option                    Command argument           Env. variable
Registry Storage Variant  -Dapicurio.storage.kind    APICURIO_STORAGE_KIND

For this property, there are three possible values:

  • sql - for the SQL storage variant.
  • kafkasql - for the KafkaSQL storage variant.
  • gitops - for the GitOps storage variant.
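
For example, either of the following selects the SQL storage variant (the runner JAR path is illustrative; see the build instructions below):

java -Dapicurio.storage.kind=sql -jar app/target/apicurio-registry-app-3.0.0-SNAPSHOT-runner.jar
APICURIO_STORAGE_KIND=sql java -jar app/target/apicurio-registry-app-3.0.0-SNAPSHOT-runner.jar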

Additionally, there are 2 main configuration profiles:

  • dev - suitable for development, and
  • prod - for production environment.

Getting started (APIs)

./mvnw clean install -DskipTests
cd app/
../mvnw quarkus:dev

This should result in Quarkus and the in-memory registry starting up, with the REST APIs available on localhost port 8080.
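
As a quick smoke test you might query the system info endpoint (a hedged example; the REST API base path shown assumes the v3 API and may differ between versions):

curl http://localhost:8080/apis/registry/v3/system/info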

Getting started (UI)

cd ui
npm install
cd ui-app
./init-dev.sh
npm run dev

This will start up the UI in development mode, hosted on port 8888 of your localhost.

For more information on the UI, see the UI module's README.md.

Build Options

  • -Pprod enables Quarkus's prod configuration profile, which uses configuration options suitable for a production environment, e.g. a higher logging level.
  • -Pnative (experimental) builds native executables. See Building a native executable.
  • -Ddocker (experimental) builds docker images. Make sure that you have the docker service enabled and running. If you get an error, try sudo chmod a+rw /var/run/docker.sock.

Runtime Configuration

The following parameters are available for executable files:

SQL

  • By default, the application expects an H2 server running at jdbc:h2:tcp://localhost:9123/mem:registry.
  • For configuring the database kind and the datasource values, the following configuration options are available:
Option                     Command argument                Env. variable
Registry SQL storage kind  -Dapicurio.storage.sql.kind     APICURIO_STORAGE_SQL_KIND
Data Source URL            -Dapicurio.datasource.url       APICURIO_DATASOURCE_URL
DS Username                -Dapicurio.datasource.username  APICURIO_DATASOURCE_USERNAME
DS Password                -Dapicurio.datasource.password  APICURIO_DATASOURCE_PASSWORD
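
For example, a hedged sketch of pointing the SQL storage at a local PostgreSQL database (the connection details are placeholders):

java \
-Dapicurio.storage.kind=sql \
-Dapicurio.storage.sql.kind=postgresql \
-Dapicurio.datasource.url=jdbc:postgresql://localhost:5432/apicurio-registry \
-Dapicurio.datasource.username=apicurio-registry \
-Dapicurio.datasource.password=password \
-jar app/target/apicurio-registry-app-3.0.0-SNAPSHOT-runner.jar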

To see additional options, refer to the Apicurio Registry documentation.

KafkaSQL

./mvnw clean install -Pprod -DskipTests builds the application artifact. The newly built runner JAR can be found in app/target.

java -Dapicurio.storage.kind=kafkasql -jar apicurio-registry-app-<version>-SNAPSHOT-runner.jar

To use Kafka as the persistent storage for the server information, the only required configuration is setting the apicurio.storage.kind property to kafkasql.

This should result in Quarkus and the registry starting up, with the UI and APIs available on localhost port 8080. By default, the registry looks for a Kafka instance at localhost:9092; see the Kafka quickstart.

Alternatively, this can be connected to a secured Kafka instance. For example, the following command provides the runner with the necessary details to connect to a Kafka instance using a PKCS12 certificate for TLS and SCRAM-SHA-512 credentials for user authentication.

java \
-Dapicurio.storage.kind=kafkasql \
-Dapicurio.kafka.common.bootstrap.servers=<kafka_bootstrap_server_address> \
-Dapicurio.kafka.common.ssl.truststore.location=<truststore_file_location> \
-Dapicurio.kafka.common.ssl.truststore.password=<truststore_file_password> \
-Dapicurio.kafka.common.ssl.truststore.type=PKCS12 \
-Dapicurio.kafka.common.security.protocol=SASL_SSL \
-Dapicurio.kafka.common.sasl.mechanism=SCRAM-SHA-512 \
-Dapicurio.kafka.common.sasl.jaas.config='org.apache.kafka.common.security.scram.ScramLoginModule required username="<username>" password="<password>";' \
-jar app/target/apicurio-registry-app-3.0.0-SNAPSHOT-runner.jar

This will start up the registry with the persistence managed by the external kafka cluster.

Docker containers

Every time a commit is pushed to main, an updated docker image is built and pushed to Docker Hub as apicurio/apicurio-registry.

Run the above docker image like this:

docker run -it -p 8080:8080 apicurio/apicurio-registry:latest-snapshot

The same configuration options are available for the docker containers, but only in the form of environment variables (the command-line parameters are for the java executable, and it is not currently possible to pass them into the container). Each docker image supports the environment variable configuration options documented above for its respective storage type.

There are a variety of docker image tags to choose from when running the registry docker images. Each release of the project has a specific tag associated with it; for example, release 1.2.0.Final has an equivalent docker tag specific to that release. We also support the following moving tags:

  • latest-snapshot : represents the most recent docker image produced whenever the main branch is updated
  • latest : represents the latest stable (released) build of Apicurio Registry
  • latest-release : represents the latest stable (released) build of Apicurio Registry (alias for latest with clearer semantics)
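
For example, to pull the most recent stable release image rather than a snapshot:

docker pull apicurio/apicurio-registry:latest-release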

Note that if you want to have access to the UI for Registry, you must also run the UI container image. You might run it like this:

docker run -it -p 8888:8080 apicurio/apicurio-registry-ui:latest-snapshot

Once both container images are running as described above, the REST APIs are available at http://localhost:8080 and the UI at http://localhost:8888.

Examples

Run Apicurio Registry with Postgres:

  • Compile using mvn clean install -DskipTests -Pprod -Ddocker

  • Then create a docker-compose file test.yml:

version: '3.1'

services:
  postgres:
    image: postgres
    environment:
      POSTGRES_USER: apicurio-registry
      POSTGRES_PASSWORD: password
  app:
    image: apicurio/apicurio-registry:3.0.0-SNAPSHOT
    ports:
      - 8080:8080
    environment:
      APICURIO_STORAGE_KIND: 'sql'
      APICURIO_STORAGE_DB_KIND: 'postgresql'
      APICURIO_DATASOURCE_URL: 'jdbc:postgresql://postgres/apicurio-registry'
      APICURIO_DATASOURCE_USERNAME: apicurio-registry
      APICURIO_DATASOURCE_PASSWORD: password
  • Run docker-compose -f test.yml up

Security

You can enable authentication for both the application REST APIs and the user interface using a server based on OpenID Connect (OIDC). The same server realm and users are federated across the user interface and the REST APIs using OpenID Connect, so you only require one set of credentials.

In order to enable this integration, you will need to set the following environment variables.

REST API Environment Variables

Env. variable           Description
AUTH_ENABLED            Set to true to enable authentication (default is false)
KEYCLOAK_URL            OIDC server URL
KEYCLOAK_REALM          OIDC security realm
KEYCLOAK_API_CLIENT_ID  The OIDC client for the API

User Interface Environment Variables

Env. variable               Description
APICURIO_AUTH_TYPE          Set to oidc (default is none)
APICURIO_AUTH_URL           OIDC auth URL
APICURIO_AUTH_REDIRECT_URL  OIDC redirect URL
APICURIO_AUTH_CLIENT_ID     The OIDC client for the UI

Note that you will need to have everything configured in your OIDC provider (the realm and the two clients) before starting the application.
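
A hedged sketch of passing these variables to the container images described above (all values are placeholders for your own OIDC provider and clients):

docker run -it -p 8080:8080 \
-e AUTH_ENABLED=true \
-e KEYCLOAK_URL=<oidc_server_url> \
-e KEYCLOAK_REALM=<realm> \
-e KEYCLOAK_API_CLIENT_ID=<api_client_id> \
apicurio/apicurio-registry:latest-snapshot

docker run -it -p 8888:8080 \
-e APICURIO_AUTH_TYPE=oidc \
-e APICURIO_AUTH_URL=<oidc_auth_url> \
-e APICURIO_AUTH_REDIRECT_URL=http://localhost:8888/ \
-e APICURIO_AUTH_CLIENT_ID=<ui_client_id> \
apicurio/apicurio-registry-ui:latest-snapshot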

Please note that Registry supports a wide range of authentication and authorization options. These options are too extensive to document in this README. Consider the above to be just a starting point. For more information see the documentation on how to configure security in Registry.

Eclipse IDE

Some notes about using the Eclipse IDE with the Apicurio Registry codebase. Before importing the registry into your workspace, we recommend some configuration of the Eclipse IDE.

Lombok Integration

We use the Lombok code generation utility in a few places. This will cause problems when Eclipse builds the sources unless you install the Lombok+Eclipse integration. To do this, either download the Lombok JAR or find it in your .m2/repository directory (it will be available in .m2 if you've done a maven build of the registry). Once you find that JAR, simply run it (e.g. double-click it) and use the resulting UI installer to install Lombok support in Eclipse.

Maven Dependency Plugin (unpack, unpack-dependencies)

We use the maven-dependency-plugin in a few places to unpack a maven module in the reactor into another module. For example, the app module unpacks the contents of the ui module to include/embed the user interface into the running application. Eclipse does not like this. To fix this, configure the Eclipse Maven "Lifecycle Mappings" to ignore the usage of maven-dependency-plugin.

  • Open up Window->Preferences
  • Choose Maven->Lifecycle Mappings
  • Click the button labeled Open workspace lifecycle mappings metadata
  • This will open an XML file behind the preferences dialog. Click Cancel to close the Preferences.
  • Add the following section to the file:
    <pluginExecution>
      <pluginExecutionFilter>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-dependency-plugin</artifactId>
        <versionRange>3.1.2</versionRange>
        <goals>
          <goal>unpack</goal>
          <goal>unpack-dependencies</goal>
        </goals>
      </pluginExecutionFilter>
      <action>
        <ignore />
      </action>
    </pluginExecution>
  • Now go back into Maven->Lifecycle Mappings and click the Reload workspace lifecycle mappings metadata button.
  • If you've already imported the Apicurio projects, select all of them and choose Maven->Update Project.

Prevent Eclipse from aggressively cleaning generated classes

We use some Google Protobuf files and a maven plugin to generate some Java classes that get stored in various modules' target directories. These are then recognized by m2e but are sometimes deleted during the Eclipse "clean" phase. To prevent Eclipse from over-cleaning these files, find the os-maven-plugin-1.6.2.jar JAR in your .m2/repository directory and copy it into $ECLIPSE_HOME/dropins.
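
For example, assuming the default local Maven repository layout (the exact version and path may differ on your machine):

cp ~/.m2/repository/kr/motd/maven/os-maven-plugin/1.6.2/os-maven-plugin-1.6.2.jar $ECLIPSE_HOME/dropins/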


apicurio-registry's Issues

Improve logging

We don't really have any useful logging in registry or in the serdes right now. We should go through and add appropriate logging (info/debug/tracing).

Validation Rule interface

We need a simple, common implementation that all validation rules will need to implement. This might be as simple as passing in the content and returning either a pass/fail or perhaps a list of errors (with empty list indicating PASS).

Examples of rules that we will implement:

  • Compatibility Rule (config and logic ported from Confluent avro compatibility feature)
  • Validation Rule (checks that the artifact conforms to configured company standards)
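
A minimal sketch of what such an interface might look like (hypothetical names, not the project's actual API; an empty list of violations indicates a pass):

import java.util.List;

/** Hypothetical common interface that every validation rule would implement. */
public interface ValidationRule {

    /**
     * Validates the given artifact content.
     *
     * @param content the raw artifact content to check
     * @return the list of violation messages; an empty list means the content passes
     */
    List<String> validate(String content);
}

A compatibility or company-standards rule would then implement validate and return whatever problems it finds.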

Downloading / Publishing YAML removes quotes from strings

When downloading or publishing a project API specification, we're seeing previously quoted values being output without quotes.

This in turn leads to a bug where the fields are incorrectly rendered in tooling such as Redoc, which typically converts the YAML into a JSON object.

A repeatable example can be seen over at https://onlineyamltools.com/convert-yaml-to-json using the following example payloads...

Quoted Date String

Priority Order request:
  value:
    order:
      courier_type: priority
      delivery_address: 1 Union Street
      delivery_postcode: "2009"
      delivery_state: NSW
      delivery_suburb: Pyrmont
      authority_to_leave: "Yes"
      delivery_date: "2016-07-26"
      delivery_window: 16:00-19:00
      parcel_attributes:
      - qty: 1
        weight: 2.1

— delivery_date results in displaying as "2016-07-26"

Unquoted Date String

Priority Order request:
  value:
    order:
      courier_type: priority
      delivery_address: 1 Union Street
      delivery_postcode: "2009"
      delivery_state: NSW
      delivery_suburb: Pyrmont
      authority_to_leave: "Yes"
      delivery_date: 2016-07-26
      delivery_window: 16:00-19:00
      parcel_attributes:
      - qty: 1
        weight: 2.1

— delivery_date results in displaying as "2016-07-26T00:00:00.000Z"

Compatibility Rule: OpenAPI

Implement the compatibility rule for OpenAPI schemas. We'll need to analyze the Avro version carefully to implement similar logic for OpenAPI.

Javadoc: add header comment to all .java files

Add the following header to all .java files in the project:

/*
 * Copyright 2019 Red Hat
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

Jersey client cannot parse replies from RESTEasy server

If the Avro converter is used in Kafka Connect, then the Jersey REST client must/should be used. The problem is that when an artifact like /artifacts/dbserver1-key is created, the server answers with

{"createdOn":1579862056239,"modifiedOn":1579864479545,"id":"dbserver1-key","version":3,"type":"AVRO","globalId":3}

Date fields are serialized as epoch milliseconds.

When the client tries to parse them it gets

connect_1    | SEVERE: Unable to deserialize property 'createdOn' because of: Error parsing class java.util.Date from value: 1579862056239. Check your @JsonbDateFormat has all time units for class java.util.Date type, or consider using org.eclipse.yasson.YassonProperties#ZERO_TIME_PARSE_DEFAULTING.
connect_1    | 2020-01-24 11:14:39,550 INFO   ||  WorkerSourceTask{id=inventory-connector-0} Committing offsets   [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1    | 2020-01-24 11:14:39,550 INFO   ||  WorkerSourceTask{id=inventory-connector-0} flushing 0 outstanding messages for offset commit   [org.apache.kafka.connect.runtime.WorkerSourceTask]
connect_1    | 2020-01-24 11:14:39,550 ERROR  ||  WorkerSourceTask{id=inventory-connector-0} Task threw an uncaught and unrecoverable exception   [org.apache.kafka.connect.runtime.WorkerTask]
connect_1    | org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
connect_1    | 	at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
connect_1    | 	at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
connect_1    | 	at org.apache.kafka.connect.runtime.WorkerSourceTask.convertTransformedRecord(WorkerSourceTask.java:287)
connect_1    | 	at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:316)
connect_1    | 	at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:240)
connect_1    | 	at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
connect_1    | 	at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
connect_1    | 	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
connect_1    | 	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
connect_1    | 	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
connect_1    | 	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
connect_1    | 	at java.base/java.lang.Thread.run(Thread.java:834)
connect_1    | Caused by: javax.ws.rs.ProcessingException: Error deserializing object from entity stream.
connect_1    | 	at org.glassfish.jersey.jsonb.internal.JsonBindingProvider.readFrom(JsonBindingProvider.java:77)
connect_1    | 	at org.glassfish.jersey.message.internal.ReaderInterceptorExecutor$TerminalReaderInterceptor.invokeReadFrom(ReaderInterceptorExecutor.java:233)
connect_1    | 	at org.glassfish.jersey.message.internal.ReaderInterceptorExecutor$TerminalReaderInterceptor.aroundReadFrom(ReaderInterceptorExecutor.java:212)
connect_1    | 	at org.glassfish.jersey.message.internal.ReaderInterceptorExecutor.proceed(ReaderInterceptorExecutor.java:132)
connect_1    | 	at org.glassfish.jersey.message.internal.MessageBodyFactory.readFrom(MessageBodyFactory.java:1071)
connect_1    | 	at org.glassfish.jersey.message.internal.InboundMessageContext.readEntity(InboundMessageContext.java:850)
connect_1    | 	at org.glassfish.jersey.message.internal.InboundMessageContext.readEntity(InboundMessageContext.java:810)
connect_1    | 	at org.glassfish.jersey.client.ClientResponse.readEntity(ClientResponse.java:339)
connect_1    | 	at org.glassfish.jersey.client.InboundJaxrsResponse$2.call(InboundJaxrsResponse.java:102)
connect_1    | 	at org.glassfish.jersey.internal.Errors.process(Errors.java:292)
connect_1    | 	at org.glassfish.jersey.internal.Errors.process(Errors.java:274)
connect_1    | 	at org.glassfish.jersey.internal.Errors.process(Errors.java:205)
connect_1    | 	at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:365)
connect_1    | 	at org.glassfish.jersey.client.InboundJaxrsResponse.runInScopeIfPossible(InboundJaxrsResponse.java:240)
connect_1    | 	at org.glassfish.jersey.client.InboundJaxrsResponse.readEntity(InboundJaxrsResponse.java:99)
connect_1    | 	at org.glassfish.jersey.microprofile.restclient.MethodModel.lambda$asynchronousCall$5(MethodModel.java:266)
connect_1    | 	at java.base/java.util.concurrent.CompletableFuture$UniAccept.tryFire(CompletableFuture.java:714)
connect_1    | 	at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
connect_1    | 	at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
connect_1    | 	at org.glassfish.jersey.client.JerseyInvocation$1.completed(JerseyInvocation.java:789)
connect_1    | 	at org.glassfish.jersey.client.ClientRuntime.processResponse(ClientRuntime.java:203)
connect_1    | 	at org.glassfish.jersey.client.ClientRuntime.access$200(ClientRuntime.java:61)
connect_1    | 	at org.glassfish.jersey.client.ClientRuntime$2.lambda$response$0(ClientRuntime.java:154)
connect_1    | 	at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248)
connect_1    | 	at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244)
connect_1    | 	at org.glassfish.jersey.internal.Errors.process(Errors.java:292)
connect_1    | 	at org.glassfish.jersey.internal.Errors.process(Errors.java:274)
connect_1    | 	at org.glassfish.jersey.internal.Errors.process(Errors.java:244)
connect_1    | 	at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:288)
connect_1    | 	at org.glassfish.jersey.client.ClientRuntime$2.response(ClientRuntime.java:154)
connect_1    | 	at org.glassfish.jersey.client.internal.HttpUrlConnector.apply(HttpUrlConnector.java:268)
connect_1    | 	at org.glassfish.jersey.client.ClientRuntime.lambda$null$3(ClientRuntime.java:163)
connect_1    | 	at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248)
connect_1    | 	at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244)
connect_1    | 	at org.glassfish.jersey.internal.Errors.process(Errors.java:292)
connect_1    | 	at org.glassfish.jersey.internal.Errors.process(Errors.java:274)
connect_1    | 	at org.glassfish.jersey.internal.Errors.process(Errors.java:244)
connect_1    | 	at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:288)
connect_1    | 	at org.glassfish.jersey.client.ClientRuntime.lambda$createRunnableForAsyncProcessing$4(ClientRuntime.java:139)
connect_1    | 	at org.glassfish.jersey.microprofile.restclient.ExecutorServiceWrapper.lambda$wrap$1(ExecutorServiceWrapper.java:124)
connect_1    | 	... 5 more
connect_1    | Caused by: javax.json.bind.JsonbException: Unable to deserialize property 'createdOn' because of: Error parsing class java.util.Date from value: 1579862056239. Check your @JsonbDateFormat has all time units for class java.util.Date type, or consider using org.eclipse.yasson.YassonProperties#ZERO_TIME_PARSE_DEFAULTING.
connect_1    | 	at org.eclipse.yasson.internal.serializer.AbstractContainerDeserializer.deserializeInternal(AbstractContainerDeserializer.java:90)
connect_1    | 	at org.eclipse.yasson.internal.serializer.AbstractContainerDeserializer.deserialize(AbstractContainerDeserializer.java:60)
connect_1    | 	at org.eclipse.yasson.internal.Unmarshaller.deserializeItem(Unmarshaller.java:68)
connect_1    | 	at org.eclipse.yasson.internal.Unmarshaller.deserialize(Unmarshaller.java:54)
connect_1    | 	at org.eclipse.yasson.internal.JsonBinding.deserialize(JsonBinding.java:53)
connect_1    | 	at org.eclipse.yasson.internal.JsonBinding.fromJson(JsonBinding.java:93)
connect_1    | 	at org.glassfish.jersey.jsonb.internal.JsonBindingProvider.readFrom(JsonBindingProvider.java:75)
connect_1    | 	... 44 more
connect_1    | Caused by: javax.json.bind.JsonbException: Error parsing class java.util.Date from value: 1579862056239. Check your @JsonbDateFormat has all time units for class java.util.Date type, or consider using org.eclipse.yasson.YassonProperties#ZERO_TIME_PARSE_DEFAULTING.
connect_1    | 	at org.eclipse.yasson.internal.serializer.AbstractDateTimeDeserializer.deserialize(AbstractDateTimeDeserializer.java:71)
connect_1    | 	at org.eclipse.yasson.internal.serializer.AbstractValueTypeDeserializer.deserialize(AbstractValueTypeDeserializer.java:64)
connect_1    | 	at org.eclipse.yasson.internal.serializer.ObjectDeserializer.deserializeNext(ObjectDeserializer.java:174)
connect_1    | 	at org.eclipse.yasson.internal.serializer.AbstractContainerDeserializer.deserializeInternal(AbstractContainerDeserializer.java:84)
connect_1    | 	... 50 more
connect_1    | Caused by: java.time.format.DateTimeParseException: Text '1579862056239' could not be parsed at index 0
connect_1    | 	at java.base/java.time.format.DateTimeFormatter.parseResolved0(DateTimeFormatter.java:2046)
connect_1    | 	at java.base/java.time.format.DateTimeFormatter.parse(DateTimeFormatter.java:1948)
connect_1    | 	at java.base/java.time.ZonedDateTime.parse(ZonedDateTime.java:598)
connect_1    | 	at org.eclipse.yasson.internal.serializer.DateTypeDeserializer.parseWithOrWithoutZone(DateTypeDeserializer.java:83)
connect_1    | 	at org.eclipse.yasson.internal.serializer.DateTypeDeserializer.parseDefault(DateTypeDeserializer.java:54)
connect_1    | 	at org.eclipse.yasson.internal.serializer.DateTypeDeserializer.parseDefault(DateTypeDeserializer.java:34)
connect_1    | 	at org.eclipse.yasson.internal.serializer.AbstractDateTimeDeserializer.deserialize(AbstractDateTimeDeserializer.java:69)
connect_1    | 	... 53 more

The client expects it to be a string-formatted value.
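
A hedged illustration of one possible client-side workaround (not the project's actual fix; it only assumes the javax.json.bind API already visible in the stack trace): register a JsonbAdapter that maps epoch-millisecond numbers to java.util.Date.

import java.util.Date;
import javax.json.bind.adapter.JsonbAdapter;

/** Maps java.util.Date to and from epoch milliseconds instead of an ISO-8601 string. */
public class EpochMillisDateAdapter implements JsonbAdapter<Date, Long> {

    @Override
    public Long adaptToJson(Date date) {
        return date == null ? null : date.getTime();
    }

    @Override
    public Date adaptFromJson(Long epochMillis) {
        return epochMillis == null ? null : new Date(epochMillis);
    }
}

The adapter could then be registered on the client's Jsonb instance, e.g. JsonbBuilder.create(new JsonbConfig().withAdapters(new EpochMillisDateAdapter())).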

Validation Rule: Avro

Implement the Validation Rule for Avro content. Rules will need to be defined for this.

Maintenance: Maven build checkup

Go over the POM files, profiles, tests etc. in case there is some maintenance work needed. At the very least, we need to make sure that mvn clean install will build, test, and package all of the supported storage variants. Right now the variants are all separated out into profiles, because the tests for "streams", "kafka", and "infinispan" will all fail because they all depend on external systems to function. We need to work on standing up those systems automatically so that tests can be run successfully.

Compatibility Rule: Protobuf

Implement the compatibility rule for Protobuf schemas. We'll need to analyze the Avro version carefully to implement similar logic for Protobuf.

Document: content-types for supported formats

We're going to be storing artifacts of the following formats in the registry:

  • JSON Schema
  • Avro
  • Protobuf
  • OpenAPI
  • AsyncAPI

Most of these are in JSON format (or YAML, which can easily be converted). But at least Protobuf is not. So there is an open question about how to handle the different content types when pushing content into the registry via the API and then returning it later.

The first step is to identify the actual content types for each format. And then additionally how to handle that in the API itself. In order to make it as simple as possible, we could try to figure out the type based on the following (in order):

  • custom Request header e.g. X-Registry-Type
  • check the Request's Content-Type for something that disambiguates the content. For example the request Content-Type might be application/json+openapi
  • try to figure out the type from the content itself. This will work well for OpenAPI and AsyncAPI and probably Protobuf, but I'm not sure what else.
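
A hedged sketch of that resolution order (hypothetical names; not the registry's actual implementation):

import java.util.Locale;

/** Hypothetical resolution of the artifact type, following the order described above. */
public class ArtifactTypeResolver {

    public String resolveType(String registryTypeHeader, String contentType, String content) {
        // 1. A custom request header, e.g. X-Registry-Type, wins if present.
        if (registryTypeHeader != null && !registryTypeHeader.isEmpty()) {
            return registryTypeHeader;
        }
        // 2. A disambiguating Content-Type, e.g. application/json+openapi.
        if (contentType != null && contentType.contains("+")) {
            return contentType.substring(contentType.indexOf('+') + 1).toUpperCase(Locale.ROOT);
        }
        // 3. Fall back to probing the content itself (see the later issue on
        //    determining the artifact type based on content).
        return probeContent(content);
    }

    private String probeContent(String content) {
        // Placeholder: real probing would parse the content as JSON, YAML, or Protobuf.
        return "UNKNOWN";
    }
}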

Compatibility Rule: avro

Implement the compatibility rule for Avro schemas. This can most likely be lifted from Perspicuus.

Compatibility Rule: AsyncAPI

Implement the compatibility rule for AsyncAPI schemas. We'll need to analyze the Avro version carefully to implement similar logic for AsyncAPI.

SAML support?

Has anyone managed to get their own setup working against corporate security with SAML? We're trying to use SAML for all applications and have an in-house version of Apicurio for our development teams to use.

I attempted to wire the code directly to SAML and bypass Keycloak, but was unable to figure out how JBoss security is wired up in this area since it's inside the server, and no matter how many Google searches I tried, I had no luck finding anyone else doing this either.

Seeing that the code is set up for Keycloak, I'm now wondering whether Keycloak can use SAML as an identity provider, but maybe someone else has done this before.

Improve error handling in serdes

If the user does not send the right thing to the topic when sending a message, the serdes Serializer doesn't throw a sensible error. It just throws a "404 not found" error. I think this is because it's trying to find the schema in the registry even when the message is e.g. a simple type. It would be better if the serializer could detect that the data being sent isn't appropriate for serialization (e.g. it's just a simple type) and throw an appropriate error.

Determine artifact type based on content

If the user does not provide the artifact type via a request header, then the registry needs to probe the content to try and determine the type. This amounts to attempting to parse the content as JSON, then as protobuf. If it's JSON, then further analysis is needed to determine if it's an OpenAPI, AsyncAPI, JSON Schema, or Avro artifact.
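
A hedged sketch of what that probing might look like (a hypothetical helper using the Jackson ObjectMapper; not the registry's actual implementation):

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

/** Hypothetical content-based artifact type detection. */
public class ContentProbe {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    public String detect(String content) {
        try {
            JsonNode root = MAPPER.readTree(content);
            if (root.has("openapi") || root.has("swagger")) {
                return "OPENAPI";
            }
            if (root.has("asyncapi")) {
                return "ASYNCAPI";
            }
            if (root.has("$schema")) {
                return "JSON";   // JSON Schema
            }
            if (root.has("type") && root.has("fields")) {
                return "AVRO";   // Avro record schema
            }
            return "JSON";
        } catch (Exception e) {
            // Not parseable as JSON: assume Protobuf (real detection would try to parse the .proto).
            return "PROTOBUF";
        }
    }
}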

Compatibility Rule: JSON Schema

Implement the compatibility rule for JSON Schema. We'll need to analyze the Avro version carefully to implement similar logic for JSON Schema.

Upgrade to Quarkus 0.26.1

Quarkus 0.26.1 has already changed enough that this project fails to compile against it. :(
Following bleeding edge projects is hard.

API: Design the registry API

Create a REST API for the registry that is more generic than the confluent compatible API - it should handle multiple content types and support rules instead of just compatibility.
