
Comments (14)

ImFlog commented on May 27, 2024

Hi,
Thank you for your interest in this plugin.
It may be related to this issue: #15.
Right now (though this will change soon) we depend on: implementation("io.confluent", "kafka-avro-serializer", "3.2.1")

Try overriding the version to see if it works better.
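For illustration, forcing a different serializer version onto the plugin's classpath might look like the following Kotlin DSL sketch (the configuration and version shown are assumptions; adjust them to match how the plugin is applied in your build and the registry version you run):

```kotlin
// build.gradle.kts — sketch: pin the serializer version used on the
// buildscript classpath alongside the schema-registry plugin.
buildscript {
    repositories {
        maven("https://packages.confluent.io/maven/")
    }
    dependencies {
        // Hypothetical override; pick the version matching your registry.
        classpath("io.confluent:kafka-avro-serializer:5.0.0")
    }
}
```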

from schema-registry-plugin.

AmalVR commented on May 27, 2024

Hi,
The project I created only contains Avro schemas. As you can see in the Gradle configuration, I haven't included any dependencies for Avro serialization, and I expect the plugin to work with its own dependencies, so unlike #15 this can't be a version-compatibility problem. But there is clearly some issue happening during serialization.
Anyway, I tried adding the Avro serializer dependency with version "3.2.1", but I get the same error.


ImFlog commented on May 27, 2024

That's strange; I just tried on my machine with your configuration and it worked.
You may have a bad character in your schema file (that's what the error seems to indicate). Can you verify that the file is UTF-8 and contains no hidden characters? If you have the Windows Subsystem for Linux, you can run cat -v mysample.avsc to spot hidden characters.

If that doesn't work, can you tell me how you launch the Schema Registry locally?
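As an illustration of what such a hidden character looks like, here is a small shell sketch (the file path and schema content are made up) that puts a UTF-8 byte-order mark in front of otherwise valid JSON and shows how cat -v and od reveal it:

```shell
# Write a schema file with a UTF-8 BOM (bytes EF BB BF) in front of the JSON.
printf '\357\273\277{"type": "string"}\n' > /tmp/sample.avsc

# cat -v renders the three invisible BOM bytes as M-oM-;M-? before the JSON.
cat -v /tmp/sample.avsc

# od shows the raw bytes; the first three are ef bb bf.
head -c 3 /tmp/sample.avsc | od -An -tx1
```

A file like this parses fine in many editors, which display nothing unusual, yet fails JSON/Avro parsing on the server side.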


AmalVR commented on May 27, 2024

I verified the file: we are using UTF-8, and I couldn't see any hidden characters using cat -v.

We are using Kubernetes on Windows 10 with the YAML below to start the Schema Registry.

apiVersion: v1
kind: List
items:

- apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: schema-registry
    labels:
      my.app: schema-registry
  spec:
    replicas: 1
    selector:
      matchLabels:
        my.app: schema-registry
    strategy:
      type: Recreate
    template:
      metadata:
        labels:
          my.app: schema-registry
      spec:
        containers:
        - name: schema-registry
          image: confluentinc/cp-schema-registry:5.0.0
          env:
          - name: SCHEMA_REGISTRY_HOST_NAME
            value: schema-registry-service
          - name: SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL
            value: zookeeper-service:2181
          - name: SCHEMA_REGISTRY_LISTENERS
            value: http://0.0.0.0:8081
          - name: SCHEMA_REGISTRY_DEBUG
            value: "true"          
          ports:
          - containerPort: 8081
          resources:
            requests:
              memory: "1Gi"
              cpu: "250m"
            limits:
              memory: "2Gi"
              cpu: "1"
        restartPolicy: Always


- apiVersion: v1
  kind: Service
  metadata:
    name: schema-registry-service
    labels:
      my.app: schema-registry
  spec:
    selector:
       my.app: schema-registry
    ports:
    - protocol: TCP
      port: 8081


ImFlog commented on May 27, 2024

Searching the Confluent registry a bit, I found this: confluentinc/schema-registry#733. It made me realize that you may have an error on the server side that is not correctly reported on the client side.

Can you check the logs of your application running the Schema Registry, @AmalVR? Don't hesitate to enable debug to get more data.
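With the Deployment posted above, the container logs can be fetched with kubectl; a sketch using the label and name from that YAML:

```shell
# Follow the schema-registry logs via the Deployment's label selector.
kubectl logs -l my.app=schema-registry --tail=200 -f

# Or address the Deployment directly by name.
kubectl logs deployment/schema-registry
```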


ImFlog commented on May 27, 2024

@AmalVR Did you manage to fetch logs on the container side?


AmalVR commented on May 27, 2024

Yes, I did; please find the logs below.
. /etc/confluent/docker/mesos-setup.sh

#!/usr/bin/env bash

set +o nounset
++ set +o nounset

if [ -z $SKIP_MESOS_AUTO_SETUP ]; then
    if [ -n $MESOS_SANDBOX ] && [ -e $MESOS_SANDBOX/.ssl/scheduler.crt ] && [ -e $MESOS_SANDBOX/.ssl/scheduler.key ]; then
        echo "Entering Mesos auto setup for Java SSL truststore. You should not see this if you are not on mesos ..."

        openssl pkcs12 -export -in $MESOS_SANDBOX/.ssl/scheduler.crt -inkey $MESOS_SANDBOX/.ssl/scheduler.key \
                       -out /tmp/keypair.p12 -name keypair \
                       -CAfile $MESOS_SANDBOX/.ssl/ca-bundle.crt -caname root -passout pass:export

        keytool -importkeystore \
                -deststorepass changeit -destkeypass changeit -destkeystore /tmp/kafka-keystore.jks \
                -srckeystore /tmp/keypair.p12 -srcstoretype PKCS12 -srcstorepass export \
                -alias keypair

        keytool -import \
                -trustcacerts \
                -alias root \
                -file $MESOS_SANDBOX/.ssl/ca-bundle.crt \
                -storepass changeit \
                -keystore /tmp/kafka-truststore.jks -noprompt
    fi
fi
++ '[' -z ']'
++ '[' -n ']'
++ '[' -e /.ssl/scheduler.crt ']'

set -o nounset
++ set -o nounset

. /etc/confluent/docker/apply-mesos-overrides

#!/usr/bin/env bash
#
# Copyright 2016 Confluent Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Mesos DC/OS docker deployments will have HOST and PORT0 
# set for the proxying of the service.
# 
# Use those values provide things we know we'll need.

[ -n "${HOST:-}" ] && [ -z "${SCHEMA_REGISTRY_HOST_NAME:-}" ] && \
	export SCHEMA_REGISTRY_HOST_NAME=$HOST || true # we don't want the setup to fail if not on Mesos
++ '[' -n '' ']'
++ true


echo "===> ENV Variables ..."
+ echo '===> ENV Variables ...'
env | sort
===> ENV Variables ...
+ env
+ sort
ALLOW_UNSIGNED=false
COMPONENT=schema-registry
CONFLUENT_DEB_VERSION=1
CONFLUENT_MAJOR_VERSION=5
CONFLUENT_MINOR_VERSION=0
CONFLUENT_MVN_LABEL=
CONFLUENT_PATCH_VERSION=0
CONFLUENT_PLATFORM_LABEL=
CONFLUENT_VERSION=5.0.0
CUB_CLASSPATH=/etc/confluent/docker/docker-utils.jar
HOME=/root
HOSTNAME=schema-registry-58b954b788-jpv7t
KAFKA_LOAD_BALANCER_PORT=tcp://10.111.4.150:9092
KAFKA_LOAD_BALANCER_PORT_9092_TCP=tcp://10.111.4.150:9092
KAFKA_LOAD_BALANCER_PORT_9092_TCP_ADDR=10.111.4.150
KAFKA_LOAD_BALANCER_PORT_9092_TCP_PORT=9092
KAFKA_LOAD_BALANCER_PORT_9092_TCP_PROTO=tcp
KAFKA_LOAD_BALANCER_SERVICE_HOST=10.111.4.150
KAFKA_LOAD_BALANCER_SERVICE_PORT=9092
KAFKA_SERVICE_PORT=tcp://10.111.51.81:9092
KAFKA_SERVICE_PORT_9092_TCP=tcp://10.111.51.81:9092
KAFKA_SERVICE_PORT_9092_TCP_ADDR=10.111.51.81
KAFKA_SERVICE_PORT_9092_TCP_PORT=9092
KAFKA_SERVICE_PORT_9092_TCP_PROTO=tcp
KAFKA_SERVICE_SERVICE_HOST=10.111.51.81
KAFKA_SERVICE_SERVICE_PORT=9092
KAFKA_VERSION=2.0.0
KSQL_SERVER_LOAD_BALANCER_PORT=tcp://10.99.109.91:8088
KSQL_SERVER_LOAD_BALANCER_PORT_8088_TCP=tcp://10.99.109.91:8088
KSQL_SERVER_LOAD_BALANCER_PORT_8088_TCP_ADDR=10.99.109.91
KSQL_SERVER_LOAD_BALANCER_PORT_8088_TCP_PORT=8088
KSQL_SERVER_LOAD_BALANCER_PORT_8088_TCP_PROTO=tcp
KSQL_SERVER_LOAD_BALANCER_SERVICE_HOST=10.99.109.91
KSQL_SERVER_LOAD_BALANCER_SERVICE_PORT=8088
KSQL_SERVICE_PORT=tcp://10.107.81.64:8088
KSQL_SERVICE_PORT_8088_TCP=tcp://10.107.81.64:8088
KSQL_SERVICE_PORT_8088_TCP_ADDR=10.107.81.64
KSQL_SERVICE_PORT_8088_TCP_PORT=8088
KSQL_SERVICE_PORT_8088_TCP_PROTO=tcp
KSQL_SERVICE_SERVICE_HOST=10.107.81.64
KSQL_SERVICE_SERVICE_PORT=8088
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
LANG=C.UTF-8
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
PYTHON_PIP_VERSION=8.1.2
PYTHON_VERSION=2.7.9-1
SCALA_VERSION=2.11
SCHEMA_REGISTRY_DEBUG=true
SCHEMA_REGISTRY_HOST_NAME=schema-registry-service
SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL=zookeeper-service:2181
SCHEMA_REGISTRY_LISTENERS=http://0.0.0.0:8081
SCHEMA_REGISTRY_LOAD_BALANCER_PORT=tcp://10.98.24.41:8081
SCHEMA_REGISTRY_LOAD_BALANCER_PORT_8081_TCP=tcp://10.98.24.41:8081
SCHEMA_REGISTRY_LOAD_BALANCER_PORT_8081_TCP_ADDR=10.98.24.41
SCHEMA_REGISTRY_LOAD_BALANCER_PORT_8081_TCP_PORT=8081
SCHEMA_REGISTRY_LOAD_BALANCER_PORT_8081_TCP_PROTO=tcp
SCHEMA_REGISTRY_LOAD_BALANCER_SERVICE_HOST=10.98.24.41
SCHEMA_REGISTRY_LOAD_BALANCER_SERVICE_PORT=8081
SCHEMA_REGISTRY_SERVICE_PORT=tcp://10.101.200.185:8081
SCHEMA_REGISTRY_SERVICE_PORT_8081_TCP=tcp://10.101.200.185:8081
SCHEMA_REGISTRY_SERVICE_PORT_8081_TCP_ADDR=10.101.200.185
SCHEMA_REGISTRY_SERVICE_PORT_8081_TCP_PORT=8081
SCHEMA_REGISTRY_SERVICE_PORT_8081_TCP_PROTO=tcp
SCHEMA_REGISTRY_SERVICE_SERVICE_HOST=10.101.200.185
SCHEMA_REGISTRY_SERVICE_SERVICE_PORT=8081
SHLVL=1
ZOOKEEPER_SERVICE_PORT=tcp://10.99.159.163:2181
ZOOKEEPER_SERVICE_PORT_2181_TCP=tcp://10.99.159.163:2181
ZOOKEEPER_SERVICE_PORT_2181_TCP_ADDR=10.99.159.163
ZOOKEEPER_SERVICE_PORT_2181_TCP_PORT=2181
ZOOKEEPER_SERVICE_PORT_2181_TCP_PROTO=tcp
ZOOKEEPER_SERVICE_SERVICE_HOST=10.99.159.163
ZOOKEEPER_SERVICE_SERVICE_PORT=2181
ZULU_OPENJDK_VERSION=8=8.30.0.1

===> User

echo "===> User"

  • id
    uid=0(root) gid=0(root) groups=0(root)

echo "===> Configuring ..."
/etc/confluent/docker/configure
===> Configuring ...

  • dub ensure-atleast-one SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS
  • dub ensure-atleast-one SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS
  • dub ensure SCHEMA_REGISTRY_HOST_NAME
  • dub path /etc/schema-registry/ writable
if [[ -n "${SCHEMA_REGISTRY_PORT-}" ]]
then
  echo "PORT is deprecated. Please use SCHEMA_REGISTRY_LISTENERS instead."
  exit 1
fi
+ [[ -n '' ]]

if [[ -n "${SCHEMA_REGISTRY_JMX_OPTS-}" ]]
then
  if [[ ! $SCHEMA_REGISTRY_JMX_OPTS == *"com.sun.management.jmxremote.rmi.port"*  ]]
  then
    echo "SCHEMA_REGISTRY_OPTS should contain 'com.sun.management.jmxremote.rmi.port' property. It is required for accessing the JMX metrics externally."
  fi
fi
  • dub template /etc/confluent/docker/schema-registry.properties.template /etc/schema-registry/schema-registry.properties
  • dub template /etc/confluent/docker/log4j.properties.template /etc/schema-registry/log4j.properties
  • dub template /etc/confluent/docker/admin.properties.template /etc/schema-registry/admin.properties

echo "===> Running preflight checks ... "
===> Running preflight checks ...

  • echo '===> Running preflight checks ... '
  • /etc/confluent/docker/ensure
if [[ -n "${SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL-}" ]]
then
    echo "===> Check if Zookeeper is healthy ..."
    cub zk-ready "$SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL" "${SCHEMA_REGISTRY_CUB_ZK_TIMEOUT:-40}"
fi
+ [[ -n zookeeper-service:2181 ]]
+ echo '===> Check if Zookeeper is healthy ...'
===> Check if Zookeeper is healthy ...
+ cub zk-ready zookeeper-service:2181 40
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 00:39 GMT
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:host.name=schema-registry-58b954b788-jpv7t
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.version=1.8.0_172
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Azul Systems, Inc.
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.home=/usr/lib/jvm/zulu-8-amd64/jre
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=/etc/confluent/docker/docker-utils.jar
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=/tmp
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.compiler=<NA>
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.name=Linux
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.arch=amd64
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.version=4.9.125-linuxkit
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.name=root
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.home=/root
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.dir=/
[main] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=zookeeper-service:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@b1bc7ed
[main-SendThread(zookeeper-service:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zookeeper-service/10.99.159.163:2181. Will not attempt to authenticate using SASL (unknown error)
[main-SendThread(zookeeper-service:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established to zookeeper-service/10.99.159.163:2181, initiating session
[main-SendThread(zookeeper-service:2181)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server zookeeper-service/10.99.159.163:2181, sessionid = 0x100000264810009, negotiated timeout = 40000
[main] INFO org.apache.zookeeper.ZooKeeper - Session: 0x100000264810009 closed

===> Check if Kafka is healthy ...
echo "===> Check if Kafka is healthy ..."

  • echo '===> Check if Kafka is healthy ...'
if [[ -n "${SCHEMA_REGISTRY_KAFKASTORE_SECURITY_PROTOCOL-}" ]] && [[ $SCHEMA_REGISTRY_KAFKASTORE_SECURITY_PROTOCOL != "PLAINTEXT" ]]
then
    cub kafka-ready \
        "${SCHEMA_REGISTRY_CUB_KAFKA_MIN_BROKERS:-1}" \
        "${SCHEMA_REGISTRY_CUB_KAFKA_TIMEOUT:-40}" \
        -b "${SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS}" \
        --config /etc/"${COMPONENT}"/admin.properties
else
    if [[ -n "${SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL-}" ]]
    then
        cub kafka-ready \
            "${SCHEMA_REGISTRY_CUB_KAFKA_MIN_BROKERS:-1}" \
            "${SCHEMA_REGISTRY_CUB_KAFKA_TIMEOUT:-40}" \
            -z "$SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL"
    elif [[ -n "${SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS-}" ]]
    then
        cub kafka-ready \
            "${KAFKA_REST_CUB_KAFKA_MIN_BROKERS:-1}" \
            "${KAFKA_REST_CUB_KAFKA_TIMEOUT:-40}" \
            -b "${SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS}"
    fi
fi
  • [[ -n '' ]]
  • [[ -n zookeeper-service:2181 ]]
  • cub kafka-ready 1 40 -z zookeeper-service:2181
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 00:39 GMT
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:host.name=schema-registry-58b954b788-jpv7t
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.version=1.8.0_172
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Azul Systems, Inc.
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.home=/usr/lib/jvm/zulu-8-amd64/jre
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=/etc/confluent/docker/docker-utils.jar
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=/tmp
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.compiler=<NA>
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.name=Linux
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.arch=amd64
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.version=4.9.125-linuxkit
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.name=root
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.home=/root
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.dir=/
[main] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=zookeeper-service:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@1b2c6ec2
[main-SendThread(zookeeper-service:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zookeeper-service/10.99.159.163:2181. Will not attempt to authenticate using SASL (unknown error)
[main-SendThread(zookeeper-service:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established to zookeeper-service/10.99.159.163:2181, initiating session
[main-SendThread(zookeeper-service:2181)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server zookeeper-service/10.99.159.163:2181, sessionid = 0x10000026481000a, negotiated timeout = 40000
[main-EventThread] INFO org.apache.zookeeper.ClientCnxn - EventThread shut down for session: 0x10000026481000a
[main] INFO org.apache.zookeeper.ZooKeeper - Session: 0x10000026481000a closed
[main] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=zookeeper-service:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@685f4c2e
[main-SendThread(zookeeper-service:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zookeeper-service/10.99.159.163:2181. Will not attempt to authenticate using SASL (unknown error)
[main-SendThread(zookeeper-service:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established to zookeeper-service/10.99.159.163:2181, initiating session
[main-SendThread(zookeeper-service:2181)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server zookeeper-service/10.99.159.163:2181, sessionid = 0x10000026481000b, negotiated timeout = 40000
[main] INFO org.apache.zookeeper.ZooKeeper - Session: 0x10000026481000b closed
[main-EventThread] INFO org.apache.zookeeper.ClientCnxn - EventThread shut down for session: 0x10000026481000b
[main] INFO org.apache.kafka.clients.admin.AdminClientConfig - AdminClientConfig values: 
	bootstrap.servers = [kafka-service:9092]
	client.id = 
	connections.max.idle.ms = 300000
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	receive.buffer.bytes = 65536
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 120000
	retries = 5
	retry.backoff.ms = 100
	sasl.client.callback.handler.class = null
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	send.buffer.bytes = 131072
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = https
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS

[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version : 2.0.0-cpNone
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : ca8d91be74ec83ed

echo "===> Launching ... "

  • echo '===> Launching ... '
    exec /etc/confluent/docker/launch
    ===> Launching ...
  • exec /etc/confluent/docker/launch
    ===> Launching schema-registry ...
[2019-04-12 07:14:16,774] INFO SchemaRegistryConfig values: 
	resource.extension.class = []
	metric.reporters = []
	kafkastore.sasl.kerberos.kinit.cmd = /usr/bin/kinit
	response.mediatype.default = application/vnd.schemaregistry.v1+json
	kafkastore.ssl.trustmanager.algorithm = PKIX
	inter.instance.protocol = http
	authentication.realm = 
	ssl.keystore.type = JKS
	kafkastore.topic = _schemas
	metrics.jmx.prefix = kafka.schema.registry
	kafkastore.ssl.enabled.protocols = TLSv1.2,TLSv1.1,TLSv1
	kafkastore.topic.replication.factor = 3
	ssl.truststore.password = [hidden]
	kafkastore.timeout.ms = 500
	host.name = schema-registry-service
	kafkastore.bootstrap.servers = []
	schema.registry.zk.namespace = schema_registry
	kafkastore.sasl.kerberos.ticket.renew.window.factor = 0.8
	kafkastore.sasl.kerberos.service.name = 
	schema.registry.resource.extension.class = []
	ssl.endpoint.identification.algorithm = 
	compression.enable = false
	kafkastore.ssl.truststore.type = JKS
	avro.compatibility.level = backward
	kafkastore.ssl.protocol = TLS
	kafkastore.ssl.provider = 
	kafkastore.ssl.truststore.location = 
	response.mediatype.preferred = [application/vnd.schemaregistry.v1+json, application/vnd.schemaregistry+json, application/json]
	kafkastore.ssl.keystore.type = JKS
	authentication.skip.paths = []
	ssl.truststore.type = JKS
	kafkastore.ssl.truststore.password = [hidden]
	access.control.allow.origin = 
	ssl.truststore.location = 
	ssl.keystore.password = [hidden]
	port = 8081
	kafkastore.ssl.keystore.location = 
	metrics.tag.map = {}
	master.eligibility = true
	ssl.client.auth = false
	kafkastore.ssl.keystore.password = [hidden]
	websocket.path.prefix = /ws
	kafkastore.security.protocol = PLAINTEXT
	ssl.trustmanager.algorithm = 
	authentication.method = NONE
	request.logger.name = io.confluent.rest-utils.requests
	ssl.key.password = [hidden]
	kafkastore.zk.session.timeout.ms = 30000
	kafkastore.sasl.mechanism = GSSAPI
	kafkastore.sasl.kerberos.ticket.renew.jitter = 0.05
	kafkastore.ssl.key.password = [hidden]
	zookeeper.set.acl = false
	schema.registry.inter.instance.protocol = 
	authentication.roles = [*]
	metrics.num.samples = 2
	ssl.protocol = TLS
	schema.registry.group.id = schema-registry
	kafkastore.ssl.keymanager.algorithm = SunX509
	kafkastore.connection.url = zookeeper-service:2181
	debug = true
	listeners = [http://0.0.0.0:8081]
	kafkastore.group.id = 
	ssl.provider = 
	ssl.enabled.protocols = []
	shutdown.graceful.ms = 1000
	ssl.keystore.location = 
	ssl.cipher.suites = []
	kafkastore.ssl.endpoint.identification.algorithm = 
	kafkastore.ssl.cipher.suites = 
	access.control.allow.methods = 
	kafkastore.sasl.kerberos.min.time.before.relogin = 60000
	ssl.keymanager.algorithm = 
	metrics.sample.window.ms = 30000
	kafkastore.init.timeout.ms = 60000
 (io.confluent.kafka.schemaregistry.rest.SchemaRegistryConfig)
[2019-04-12 07:14:17,060] INFO Logging initialized @1451ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
[2019-04-12 07:14:19,177] INFO Starting ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2019-04-12 07:14:19,184] INFO Client environment:zookeeper.version=3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 00:39 GMT (org.apache.zookeeper.ZooKeeper)
[2019-04-12 07:14:19,184] INFO Client environment:host.name=schema-registry-58b954b788-jpv7t (org.apache.zookeeper.ZooKeeper)
[2019-04-12 07:14:19,184] INFO Client environment:java.version=1.8.0_172 (org.apache.zookeeper.ZooKeeper)
[2019-04-12 07:14:19,184] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
[2019-04-12 07:14:19,185] INFO Client environment:java.home=/usr/lib/jvm/zulu-8-amd64/jre (org.apache.zookeeper.ZooKeeper)
[2019-04-12 07:14:19,185] INFO Client environment:java.class.path=:/usr/bin/../package-schema-registry/target/kafka-schema-registry-package-*-development/share/java/schema-registry/*:/usr/bin/../share/java/confluent-common/zookeeper-3.4.13.jar:/usr/bin/../share/java/confluent-common/common-metrics-5.0.0.jar:/usr/bin/../share/java/confluent-common/log4j-1.2.17.jar:/usr/bin/../share/java/confluent-common/audience-annotations-0.5.0.jar:/usr/bin/../share/java/confluent-common/netty-3.10.6.Final.jar:/usr/bin/../share/java/confluent-common/jline-0.9.94.jar:/usr/bin/../share/java/confluent-common/slf4j-api-1.7.25.jar:/usr/bin/../share/java/confluent-common/zkclient-0.10.jar:/usr/bin/../share/java/confluent-common/common-config-5.0.0.jar:/usr/bin/../share/java/confluent-common/common-utils-5.0.0.jar:/usr/bin/../share/java/confluent-common/build-tools-5.0.0.jar:/usr/bin/../share/java/rest-utils/asm-tree-6.2.jar:/usr/bin/../share/java/rest-utils/activation-1.1.1.jar:/usr/bin/../share/java/rest-utils/javax.annotation-api-1.2.jar:/usr/bin/../share/java/rest-utils/jetty-security-9.4.11.v20180605.jar:/usr/bin/../share/java/rest-utils/javax.inject-1.jar:/usr/bin/../share/java/rest-utils/javax-websocket-server-impl-9.4.11.v20180605.jar:/usr/bin/../share/java/rest-utils/jetty-util-9.4.11.v20180605.jar:/usr/bin/../share/java/rest-utils/hk2-utils-2.5.0-b42.jar:/usr/bin/../share/java/rest-utils/javax.websocket-api-1.0.jar:/usr/bin/../share/java/rest-utils/hibernate-validator-5.1.3.Final.jar:/usr/bin/../share/java/rest-utils/jetty-server-9.4.11.v20180605.jar:/usr/bin/../share/java/rest-utils/jetty-plus-9.4.11.v20180605.jar:/usr/bin/../share/java/rest-utils/jersey-container-servlet-2.27.jar:/usr/bin/../share/java/rest-utils/websocket-api-9.4.11.v20180605.jar:/usr/bin/../share/java/rest-utils/jetty-xml-9.4.11.v20180605.jar:/usr/bin/../share/java/rest-utils/hk2-locator-2.5.0-b42.jar:/usr/bin/../share/java/rest-utils/jackson-module-jaxb-annotations-2.9.6.jar:/usr/bin/../share/java/rest-util
s/jersey-common-2.27.jar:/usr/bin/../share/java/rest-utils/jetty-client-9.4.11.v20180605.jar:/usr/bin/../share/java/rest-utils/jboss-logging-3.1.3.GA.jar:/usr/bin/../share/java/rest-utils/jetty-jaas-9.4.11.v20180605.jar:/usr/bin/../share/java/rest-utils/asm-6.2.jar:/usr/bin/../share/java/rest-utils/jersey-container-servlet-core-2.27.jar:/usr/bin/../share/java/rest-utils/jetty-webapp-9.4.11.v20180605.jar:/usr/bin/../share/java/rest-utils/osgi-resource-locator-1.0.1.jar:/usr/bin/../share/java/rest-utils/jackson-core-2.9.6.jar:/usr/bin/../share/java/rest-utils/jersey-server-2.27.jar:/usr/bin/../share/java/rest-utils/aopalliance-repackaged-2.5.0-b42.jar:/usr/bin/../share/java/rest-utils/jackson-annotations-2.9.6.jar:/usr/bin/../share/java/rest-utils/hk2-api-2.5.0-b42.jar:/usr/bin/../share/java/rest-utils/rest-utils-5.0.0.jar:/usr/bin/../share/java/rest-utils/jersey-media-jaxb-2.27.jar:/usr/bin/../share/java/rest-utils/javax.ws.rs-api-2.1.jar:/usr/bin/../share/java/rest-utils/jetty-servlet-9.4.11.v20180605.jar:/usr/bin/../share/java/rest-utils/asm-commons-6.2.jar:/usr/bin/../share/java/rest-utils/websocket-server-9.4.11.v20180605.jar:/usr/bin/../share/java/rest-utils/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/rest-utils/jackson-databind-2.9.6.jar:/usr/bin/../share/java/rest-utils/jersey-hk2-2.27.jar:/usr/bin/../share/java/rest-utils/jetty-continuation-9.4.11.v20180605.jar:/usr/bin/../share/java/rest-utils/jetty-http-9.4.11.v20180605.jar:/usr/bin/../share/java/rest-utils/jackson-jaxrs-json-provider-2.9.6.jar:/usr/bin/../share/java/rest-utils/javax.el-api-2.2.4.jar:/usr/bin/../share/java/rest-utils/jetty-jmx-9.4.11.v20180605.jar:/usr/bin/../share/java/rest-utils/websocket-common-9.4.11.v20180605.jar:/usr/bin/../share/java/rest-utils/javassist-3.22.0-CR2.jar:/usr/bin/../share/java/rest-utils/classmate-1.0.0.jar:/usr/bin/../share/java/rest-utils/jetty-jndi-9.4.11.v20180605.jar:/usr/bin/../share/java/rest-utils/jaxb-api-2.3.0.jar:/usr/bin/../share/java/rest-utils/asm-
analysis-6.2.jar:/usr/bin/../share/java/rest-utils/websocket-servlet-9.4.11.v20180605.jar:/usr/bin/../share/java/rest-utils/javax.el-2.2.4.jar:/usr/bin/../share/java/rest-utils/jersey-bean-validation-2.27.jar:/usr/bin/../share/java/rest-utils/jetty-servlets-9.4.11.v20180605.jar:/usr/bin/../share/java/rest-utils/javax.websocket-client-api-1.0.jar:/usr/bin/../share/java/rest-utils/jackson-jaxrs-base-2.9.6.jar:/usr/bin/../share/java/rest-utils/javax-websocket-client-impl-9.4.11.v20180605.jar:/usr/bin/../share/java/rest-utils/javax.inject-2.5.0-b42.jar:/usr/bin/../share/java/rest-utils/validation-api-1.1.0.Final.jar:/usr/bin/../share/java/rest-utils/jersey-client-2.27.jar:/usr/bin/../share/java/rest-utils/jetty-io-9.4.11.v20180605.jar:/usr/bin/../share/java/rest-utils/websocket-client-9.4.11.v20180605.jar:/usr/bin/../share/java/rest-utils/jetty-annotations-9.4.11.v20180605.jar:/usr/bin/../share/java/schema-registry/javax.annotation-api-1.2.jar:/usr/bin/../share/java/schema-registry/zookeeper-3.4.13.jar:/usr/bin/../share/java/schema-registry/confluent-licensing-new-5.0.0.jar:/usr/bin/../share/java/schema-registry/gson-2.7.jar:/usr/bin/../share/java/schema-registry/hibernate-validator-5.1.3.Final.jar:/usr/bin/../share/java/schema-registry/metrics-core-2.2.0.jar:/usr/bin/../share/java/schema-registry/confluent-schema-registry-security-plugin-5.0.0.jar:/usr/bin/../share/java/schema-registry/log4j-1.2.17.jar:/usr/bin/../share/java/schema-registry/jopt-simple-5.0.4.jar:/usr/bin/../share/java/schema-registry/avro-1.8.1.jar:/usr/bin/../share/java/schema-registry/audience-annotations-0.5.0.jar:/usr/bin/../share/java/schema-registry/scala-reflect-2.11.12.jar:/usr/bin/../share/java/schema-registry/jackson-mapper-asl-1.9.13.jar:/usr/bin/../share/java/schema-registry/netty-3.10.6.Final.jar:/usr/bin/../share/java/schema-registry/snappy-java-1.1.7.1.jar:/usr/bin/../share/java/schema-registry/jersey-common-2.27.jar:/usr/bin/../share/java/schema-registry/jboss-logging-3.1.3.GA.jar:/usr/
bin/../share/java/schema-registry/jline-0.9.94.jar:/usr/bin/../share/java/schema-registry/kafka-schema-registry-client-5.0.0.jar:/usr/bin/../share/java/schema-registry/osgi-resource-locator-1.0.1.jar:/usr/bin/../share/java/schema-registry/jackson-core-2.9.6.jar:/usr/bin/../share/java/schema-registry/jersey-server-2.27.jar:/usr/bin/../share/java/schema-registry/confluent-security-plugins-common-5.0.0.jar:/usr/bin/../share/java/schema-registry/jackson-annotations-2.9.6.jar:/usr/bin/../share/java/schema-registry/kafka-schema-registry-5.0.0.jar:/usr/bin/../share/java/schema-registry/protobuf-java-util-3.4.0.jar:/usr/bin/../share/java/schema-registry/slf4j-api-1.7.25.jar:/usr/bin/../share/java/schema-registry/jersey-media-jaxb-2.27.jar:/usr/bin/../share/java/schema-registry/javax.ws.rs-api-2.1.jar:/usr/bin/../share/java/schema-registry/jackson-core-asl-1.9.13.jar:/usr/bin/../share/java/schema-registry/jose4j-0.6.1.jar:/usr/bin/../share/java/schema-registry/jackson-databind-2.9.6.jar:/usr/bin/../share/java/schema-registry/commons-compress-1.8.1.jar:/usr/bin/../share/java/schema-registry/guava-20.0.jar:/usr/bin/../share/java/schema-registry/xz-1.5.jar:/usr/bin/../share/java/schema-registry/javax.el-api-2.2.4.jar:/usr/bin/../share/java/schema-registry/protobuf-java-3.4.0.jar:/usr/bin/../share/java/schema-registry/kafka_2.11-2.0.0-cp1.jar:/usr/bin/../share/java/schema-registry/paranamer-2.7.jar:/usr/bin/../share/java/schema-registry/classmate-1.0.0.jar:/usr/bin/../share/java/schema-registry/scala-logging_2.11-3.9.0.jar:/usr/bin/../share/java/schema-registry/javax.el-2.2.4.jar:/usr/bin/../share/java/schema-registry/jersey-bean-validation-2.27.jar:/usr/bin/../share/java/schema-registry/zkclient-0.10.jar:/usr/bin/../share/java/schema-registry/slf4j-log4j12-1.7.25.jar:/usr/bin/../share/java/schema-registry/confluent-serializers-new-5.0.0.jar:/usr/bin/../share/java/schema-registry/lz4-java-1.4.1.jar:/usr/bin/../share/java/schema-registry/javax.inject-2.5.0-b42.jar:/usr/bin/../sha
re/java/schema-registry/kafka-clients-2.0.0-cp1.jar:/usr/bin/../share/java/schema-registry/validation-api-1.1.0.Final.jar:/usr/bin/../share/java/schema-registry/scala-library-2.11.12.jar:/usr/bin/../share/java/schema-registry/jersey-client-2.27.jar:/usr/bin/../share/java/schema-registry/common-utils-5.0.0.jar (org.apache.zookeeper.ZooKeeper)
[2019-04-12 07:14:19,185] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
[2019-04-12 07:14:19,185] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
[2019-04-12 07:14:19,185] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[2019-04-12 07:14:19,185] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
[2019-04-12 07:14:19,185] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
[2019-04-12 07:14:19,185] INFO Client environment:os.version=4.9.125-linuxkit (org.apache.zookeeper.ZooKeeper)
[2019-04-12 07:14:19,185] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper)
[2019-04-12 07:14:19,186] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper)
[2019-04-12 07:14:19,186] INFO Client environment:user.dir=/ (org.apache.zookeeper.ZooKeeper)
[2019-04-12 07:14:19,187] INFO Initiating client connection, connectString=zookeeper-service:2181 sessionTimeout=30000 watcher=org.I0Itec.zkclient.ZkClient@1b11171f (org.apache.zookeeper.ZooKeeper)
[2019-04-12 07:14:19,202] INFO Waiting for keeper state SyncConnected (org.I0Itec.zkclient.ZkClient)
[2019-04-12 07:14:19,206] INFO Opening socket connection to server zookeeper-service/10.99.159.163:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2019-04-12 07:14:19,212] INFO Socket connection established to zookeeper-service/10.99.159.163:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2019-04-12 07:14:19,247] INFO Session establishment complete on server zookeeper-service/10.99.159.163:2181, sessionid = 0x10000026481000c, negotiated timeout = 30000 (org.apache.zookeeper.ClientCnxn)
[2019-04-12 07:14:19,250] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)
[2019-04-12 07:14:19,264] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2019-04-12 07:14:19,424] INFO Terminate ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2019-04-12 07:14:19,456] INFO Session: 0x10000026481000c closed (org.apache.zookeeper.ZooKeeper)
[2019-04-12 07:14:19,456] INFO EventThread shut down for session: 0x10000026481000c (org.apache.zookeeper.ClientCnxn)
[2019-04-12 07:14:19,456] INFO Initializing KafkaStore with broker endpoints: PLAINTEXT://kafka-service:9092 (io.confluent.kafka.schemaregistry.storage.KafkaStore)
[2019-04-12 07:14:19,482] INFO AdminClientConfig values: 
	bootstrap.servers = [PLAINTEXT://kafka-service:9092]
	client.id = 
	connections.max.idle.ms = 300000
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	receive.buffer.bytes = 65536
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 120000
	retries = 5
	retry.backoff.ms = 100
	sasl.client.callback.handler.class = null
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	send.buffer.bytes = 131072
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = https
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
 (org.apache.kafka.clients.admin.AdminClientConfig)
[2019-04-12 07:14:19,595] WARN The configuration 'connection.url' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)
[2019-04-12 07:14:19,599] INFO Kafka version : 2.0.0-cp1 (org.apache.kafka.common.utils.AppInfoParser)
[2019-04-12 07:14:19,599] INFO Kafka commitId : 4b1dd33f255ddd2f (org.apache.kafka.common.utils.AppInfoParser)
[2019-04-12 07:14:19,970] INFO Validating schemas topic _schemas (io.confluent.kafka.schemaregistry.storage.KafkaStore)
[2019-04-12 07:14:19,977] WARN The replication factor of the schema topic _schemas is less than the desired one of 3. If this is a production environment, it's crucial to add more brokers and increase the replication factor of the topic. (io.confluent.kafka.schemaregistry.storage.KafkaStore)
[2019-04-12 07:14:20,064] INFO ProducerConfig values: 
	acks = -1
	batch.size = 16384
	bootstrap.servers = [PLAINTEXT://kafka-service:9092]
	buffer.memory = 33554432
	client.id = 
	compression.type = none
	confluent.batch.expiry.ms = 30000
	connections.max.idle.ms = 540000
	enable.idempotence = false
	interceptor.classes = []
	key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
	linger.ms = 0
	max.block.ms = 60000
	max.in.flight.requests.per.connection = 5
	max.request.size = 1048576
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
	receive.buffer.bytes = 32768
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 30000
	retries = 0
	retry.backoff.ms = 100
	sasl.client.callback.handler.class = null
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	send.buffer.bytes = 131072
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = https
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	transaction.timeout.ms = 60000
	transactional.id = null
	value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
 (org.apache.kafka.clients.producer.ProducerConfig)
[2019-04-12 07:14:20,154] WARN The configuration 'connection.url' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
[2019-04-12 07:14:20,154] INFO Kafka version : 2.0.0-cp1 (org.apache.kafka.common.utils.AppInfoParser)
[2019-04-12 07:14:20,154] INFO Kafka commitId : 4b1dd33f255ddd2f (org.apache.kafka.common.utils.AppInfoParser)
[2019-04-12 07:14:20,174] INFO ConsumerConfig values: 
	auto.commit.interval.ms = 5000
	auto.offset.reset = earliest
	bootstrap.servers = [PLAINTEXT://kafka-service:9092]
	check.crcs = true
	client.id = KafkaStore-reader-_schemas
	connections.max.idle.ms = 540000
	default.api.timeout.ms = 60000
	enable.auto.commit = false
	exclude.internal.topics = true
	fetch.max.bytes = 52428800
	fetch.max.wait.ms = 500
	fetch.min.bytes = 1
	group.id = schema-registry-schema-registry-service-8081
	heartbeat.interval.ms = 3000
	interceptor.classes = []
	internal.leave.group.on.close = true
	isolation.level = read_uncommitted
	key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
	max.partition.fetch.bytes = 1048576
	max.poll.interval.ms = 300000
	max.poll.records = 500
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
	receive.buffer.bytes = 65536
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 30000
	retry.backoff.ms = 100
	sasl.client.callback.handler.class = null
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	send.buffer.bytes = 131072
	session.timeout.ms = 10000
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = https
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
 (org.apache.kafka.clients.consumer.ConsumerConfig)
[2019-04-12 07:14:20,268] WARN The configuration 'connection.url' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig)
[2019-04-12 07:14:20,268] INFO Kafka version : 2.0.0-cp1 (org.apache.kafka.common.utils.AppInfoParser)
[2019-04-12 07:14:20,268] INFO Kafka commitId : 4b1dd33f255ddd2f (org.apache.kafka.common.utils.AppInfoParser)
[2019-04-12 07:14:20,292] INFO Cluster ID: _QAvlN5IR6KbD3xswC8PwQ (org.apache.kafka.clients.Metadata)
[2019-04-12 07:14:20,295] INFO Initialized last consumed offset to -1 (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread)
[2019-04-12 07:14:20,296] INFO [kafka-store-reader-thread-_schemas]: Starting (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread)
[2019-04-12 07:14:20,347] INFO [Consumer clientId=KafkaStore-reader-_schemas, groupId=schema-registry-schema-registry-service-8081] Resetting offset for partition _schemas-0 to offset 0. (org.apache.kafka.clients.consumer.internals.Fetcher)
[2019-04-12 07:14:20,747] INFO Cluster ID: _QAvlN5IR6KbD3xswC8PwQ (org.apache.kafka.clients.Metadata)
[2019-04-12 07:14:20,863] INFO Wait to catch up until the offset of the last message at 2 (io.confluent.kafka.schemaregistry.storage.KafkaStore)
[2019-04-12 07:14:20,950] INFO Joining schema registry with Zookeeper-based coordination (io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry)
[2019-04-12 07:14:20,958] INFO Initiating client connection, connectString=zookeeper-service:2181 sessionTimeout=30000 watcher=org.I0Itec.zkclient.ZkClient@a4add54 (org.apache.zookeeper.ZooKeeper)
[2019-04-12 07:14:20,958] INFO Starting ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2019-04-12 07:14:20,966] INFO Opening socket connection to server zookeeper-service/10.99.159.163:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2019-04-12 07:14:20,966] INFO Waiting for keeper state SyncConnected (org.I0Itec.zkclient.ZkClient)
[2019-04-12 07:14:20,967] INFO Socket connection established to zookeeper-service/10.99.159.163:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2019-04-12 07:14:20,978] INFO Session establishment complete on server zookeeper-service/10.99.159.163:2181, sessionid = 0x10000026481000d, negotiated timeout = 30000 (org.apache.zookeeper.ClientCnxn)
[2019-04-12 07:14:20,979] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)
[2019-04-12 07:14:20,986] INFO Created schema registry namespace zookeeper-service:2181/schema_registry (io.confluent.kafka.schemaregistry.masterelector.zookeeper.ZookeeperMasterElector)
[2019-04-12 07:14:20,986] INFO Terminate ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2019-04-12 07:14:21,001] INFO Session: 0x10000026481000d closed (org.apache.zookeeper.ZooKeeper)
[2019-04-12 07:14:21,001] INFO EventThread shut down for session: 0x10000026481000d (org.apache.zookeeper.ClientCnxn)
[2019-04-12 07:14:21,008] INFO Initiating client connection, connectString=zookeeper-service:2181/schema_registry sessionTimeout=30000 watcher=org.I0Itec.zkclient.ZkClient@5ba88be8 (org.apache.zookeeper.ZooKeeper)
[2019-04-12 07:14:21,008] INFO Starting ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2019-04-12 07:14:21,015] INFO Waiting for keeper state SyncConnected (org.I0Itec.zkclient.ZkClient)
[2019-04-12 07:14:21,020] INFO Opening socket connection to server zookeeper-service/10.99.159.163:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2019-04-12 07:14:21,022] INFO Socket connection established to zookeeper-service/10.99.159.163:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2019-04-12 07:14:21,054] INFO Session establishment complete on server zookeeper-service/10.99.159.163:2181, sessionid = 0x10000026481000e, negotiated timeout = 30000 (org.apache.zookeeper.ClientCnxn)
[2019-04-12 07:14:21,054] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)
[2019-04-12 07:14:21,090] INFO Successfully elected the new master: {"host":"schema-registry-service","port":8081,"master_eligibility":true,"scheme":"http","version":1} (io.confluent.kafka.schemaregistry.masterelector.zookeeper.ZookeeperMasterElector)
[2019-04-12 07:14:21,127] INFO Wait to catch up until the offset of the last message at 3 (io.confluent.kafka.schemaregistry.storage.KafkaStore)
[2019-04-12 07:14:21,128] INFO /schema_registry_master exists with value {"host":"schema-registry-service","port":8081,"master_eligibility":true,"scheme":"http","version":1} during connection loss; this is ok (kafka.utils.ZkUtils)
[2019-04-12 07:14:21,129] INFO Successfully elected the new master: {"host":"schema-registry-service","port":8081,"master_eligibility":true,"scheme":"http","version":1} (io.confluent.kafka.schemaregistry.masterelector.zookeeper.ZookeeperMasterElector)
[2019-04-12 07:14:21,360] INFO Adding listener: http://0.0.0.0:8081 (io.confluent.rest.Application)
[2019-04-12 07:14:22,259] INFO jetty-9.4.11.v20180605; built: 2018-06-05T18:24:03.829Z; git: d5fc0523cfa96bfebfbda19606cad384d772f04c; jvm 1.8.0_172-b01 (org.eclipse.jetty.server.Server)
[2019-04-12 07:14:22,570] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session)
[2019-04-12 07:14:22,570] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session)
[2019-04-12 07:14:22,573] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session)
Apr 12, 2019 7:14:24 AM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.ConfigResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.ConfigResource will be ignored. 
Apr 12, 2019 7:14:24 AM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.SchemasResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.SchemasResource will be ignored. 
Apr 12, 2019 7:14:24 AM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.CompatibilityResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.CompatibilityResource will be ignored. 
Apr 12, 2019 7:14:24 AM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.SubjectsResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.SubjectsResource will be ignored. 
Apr 12, 2019 7:14:24 AM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.SubjectVersionsResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.SubjectVersionsResource will be ignored. 
[2019-04-12 07:14:24,987] INFO HV000001: Hibernate Validator 5.1.3.Final (org.hibernate.validator.internal.util.Version)
[2019-04-12 07:14:25,780] INFO Started o.e.j.s.ServletContextHandler@3d484181{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
[2019-04-12 07:14:25,868] INFO Started o.e.j.s.ServletContextHandler@53f0a4cb{/ws,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
[2019-04-12 07:14:25,955] INFO Started NetworkTrafficServerConnector@1477089c{HTTP/1.1,[http/1.1]}{0.0.0.0:8081} (org.eclipse.jetty.server.AbstractConnector)
[2019-04-12 07:14:25,956] INFO Started @10409ms (org.eclipse.jetty.server.Server)
[2019-04-12 07:14:25,957] INFO Server started, listening for requests... (io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain)

Interestingly, with the same plugin and the same schemas, we were able to run the task and push to the schema registry before. The failure is happening on my local machine, where the registry is also running locally.

From the log, I couldn't identify anything related to schema validation on the server side. I suspect it breaks even before that, possibly in the client APIs.
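One way to test that suspicion is to build the same `{"schema": "..."}` envelope that the client's RegisterSchemaRequest sends and confirm it serializes cleanly on this machine. This is a minimal sketch using a placeholder record schema rather than the actual `.avsc` content:

```python
import json

# A minimal Avro record schema standing in for the real .avsc content
# (hypothetical example, not the reporter's actual schema).
schema_text = json.dumps({
    "type": "record",
    "name": "Sample",
    "fields": [{"name": "id", "type": "string"}],
})

# The Schema Registry REST API expects the schema as an escaped JSON string
# inside a {"schema": "..."} envelope; json.dumps performs that escaping,
# which is essentially what the client does before sending the request.
payload = json.dumps({"schema": schema_text})

# Round-trip check: a payload that builds and parses back cleanly rules out
# basic JSON-encoding problems on this machine.
assert json.loads(payload)["schema"] == schema_text
print(payload)
```

If the same round trip fails when reading the real schema file from disk, the problem is in the file's bytes rather than in the plugin's request handling.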

from schema-registry-plugin.

ImFlog avatar ImFlog commented on May 27, 2024

It seems that your logs only cover boot time.

[2019-04-12 07:14:25,957] INFO Server started, listening for requests... (io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain)
Can you try to register the schema and share the lines that follow this one?

Overall, I think it's a Windows-related issue, but I would like the plugin to work on every OS :)

from schema-registry-plugin.

ImFlog avatar ImFlog commented on May 27, 2024

@AmalVR, can you give an update on this? Have you been able to fetch the logs when the call is made?
We already established that this is a Windows-related error, so I'm not able to reproduce it.
Without news from you in the next few days, I will close this task.
Thank you :)

from schema-registry-plugin.

AmalVR avatar AmalVR commented on May 27, 2024

@ImFlog, in fact I tried to collect the logs after attempting to register the schema, but I couldn't find anything useful in them.
It seems the call never reaches the server and breaks on the client side while the request is being processed.
I suspect a problem during JSON (RegisterSchemaRequest) processing.

Also, I am not sure this is OS-related, because on a different machine, also running Windows, we were able to run the registerSchemaTask successfully.
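Since the behavior differs between two Windows machines, one plausible culprit is an editor-added UTF-8 byte-order mark or another non-printable byte that a quick `cat -v` can miss. A small sketch to check for that, run here on in-memory example bytes as a placeholder for reading the real `.avsc` file:

```python
# A UTF-8 BOM at the start of a .avsc file is a common cause of
# "bad character" JSON parse errors in files edited on Windows.
BOM = b"\xef\xbb\xbf"

def find_hidden_bytes(data: bytes) -> list[str]:
    """Report a leading BOM and any control bytes other than tab/LF/CR."""
    issues = []
    if data.startswith(BOM):
        issues.append("UTF-8 BOM at start of file")
    for i, b in enumerate(data):
        # Bytes >= 0x20 (printable ASCII and UTF-8 sequences) are allowed.
        if b < 0x20 and b not in (0x09, 0x0A, 0x0D):
            issues.append(f"control byte 0x{b:02x} at offset {i}")
    return issues

# Demo on in-memory bytes; replace with open("sample.avsc", "rb").read().
clean = b'{"type": "string"}'
bommed = BOM + clean
assert find_hidden_bytes(clean) == []
assert find_hidden_bytes(bommed) == ["UTF-8 BOM at start of file"]
print("checks passed")
```

If the file on the failing machine reports a BOM while the working machine's copy does not, that would explain the client-side parse failure without any OS difference in the plugin itself.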

from schema-registry-plugin.

ImFlog avatar ImFlog commented on May 27, 2024

@AmalVR, the best way to see what could be wrong would be for you to create a GitHub repository with all your files.
Maybe I missed something.

from schema-registry-plugin.

ImFlog avatar ImFlog commented on May 27, 2024

It's been a month since my last comment.
Any news, @AmalVR? Otherwise I'll close this issue, as it seems to be something outside the plugin's scope.

from schema-registry-plugin.

AmalVR avatar AmalVR commented on May 27, 2024

from schema-registry-plugin.

ImFlog avatar ImFlog commented on May 27, 2024

My pleasure. Do not hesitate to reach out if you have any new issues ;)

from schema-registry-plugin.
