
cdc-vagrant's Introduction

cdc-vagrant

introduction

This project provides an experimental environment for flink-cdc and Doris.

CDC (Change Data Capture) is made up of two components: CDD and CDT. CDD stands for Change Data Detection and CDT stands for Change Data Transfer.

Extract, Load, Transform (ELT) is a data integration process for transferring raw data from a source server to a data system (such as a data warehouse or data lake) on a target server and then preparing the information for downstream uses.

Streaming ETL (Extract, Transform, Load) is the processing and movement of real-time data from one place to another. ETL is short for the database functions extract, transform, and load.

architecture

doris cluster

| vm | role | ip | xxx_home |
| --- | --- | --- | --- |
| vm111 | doris FE (leader) | 192.168.56.111 | /opt/doris/fe/ |
| vm112 | doris FE (observer) | 192.168.56.112 | /opt/doris/fe/ |
| vm113 | doris BE | 192.168.56.113 | /opt/doris/be/ |
| vm114 | doris BE | 192.168.56.114 | /opt/doris/be/ |
| vm115 | doris BE | 192.168.56.115 | /opt/doris/be/ |

hdfs cluster and yarn cluster

| vm | role | ip | xxx_home |
| --- | --- | --- | --- |
| vm116 | hdfs: NameNode (active), zkfc; yarn: RM; zookeeper | 192.168.56.116 | /opt/hadoop |
| vm117 | hdfs: NameNode (standby), zkfc; yarn: RM; zookeeper | 192.168.56.117 | /opt/hadoop |
| vm118 | hdfs: NameNode (observer), zkfc; yarn: RM; zookeeper | 192.168.56.118 | /opt/hadoop |
| vm119 | hdfs: DataNode, JournalNode; yarn: NM | 192.168.56.119 | /opt/hadoop |
| vm120 | hdfs: DataNode, JournalNode; yarn: NM | 192.168.56.120 | /opt/hadoop |
| vm121 | hdfs: DataNode, JournalNode; yarn: NM | 192.168.56.121 | /opt/hadoop |

flink standalone cluster

minio cluster and flink standalone cluster

Reuse the above virtual machines due to hardware constraints.

| vm | role | ip | xxx_home |
| --- | --- | --- | --- |
| vm116 | docker and compose, minio, sidekick, flink (masters+workers) | 192.168.56.116 | /opt/flink |
| vm117 | docker and compose, minio, sidekick, flink (masters+workers) | 192.168.56.117 | /opt/flink |
| vm118 | docker and compose, minio, sidekick, flink (masters+workers) | 192.168.56.118 | /opt/flink |
| vm119 | docker and compose, minio, sidekick, flink (workers) | 192.168.56.119 | /opt/flink |
| vm120 | docker and compose, sidekick, flink (workers) | 192.168.56.120 | /opt/flink |
| vm121 | docker and compose, sidekick, flink (workers) | 192.168.56.121 | /opt/flink |

Usage

HDFS HA

# vm116
# hdfs namenode -format (run once)
# hdfs --daemon start namenode (depends on QJM; start the journalnodes first: hdfs --daemon start journalnode)
# hdfs zkfc -formatZK (run once)
# hdfs --daemon start zkfc


# vm117
# hdfs namenode -bootstrapStandby (run once)
# hdfs --daemon start namenode (depends on QJM; start the journalnodes first: hdfs --daemon start journalnode)
# hdfs --daemon start zkfc

# vm118
# hdfs namenode -bootstrapStandby (run once)
# hdfs --daemon start namenode (depends on QJM; start the journalnodes first: hdfs --daemon start journalnode)
# hdfs --daemon start zkfc

# hduser@vm116:~$ hdfs haadmin -getServiceState nn1
# standby
# hduser@vm116:~$ hdfs haadmin -getServiceState nn2
# active
# hduser@vm116:~$ hdfs haadmin -getServiceState nn3
# standby


# vm119 vm120 vm121
# hdfs --daemon start journalnode
# hdfs --daemon start datanode
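
Once the daemons are up, the NameNode HA state and DataNode registration can be verified; a minimal hedged check from any NameNode host:

# hdfs haadmin -getAllServiceState (prints the active/standby state of every NameNode)
# hdfs dfsadmin -report (confirms the three DataNodes have registered)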

YARN HA

# yarn --daemon start resourcemanager   //vm116 vm117 vm118
# yarn --daemon start nodemanager       //vm119 vm120 vm121
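
The ResourceManager HA state can be checked the same way; a brief hedged sketch:

# yarn rmadmin -getAllServiceState     //shows which RM is active and which are standby
# yarn node -list                      //lists the NodeManagers on vm119 vm120 vm121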

minio HA

minio server

# vm116 vm117 vm118 vm119
bash /vagrant/scripts/install-docker.sh
bash /vagrant/scripts/install-minio.sh

ref docker-compose.yaml

minio client

curl -o /usr/local/bin/mc -# -fSL https://dl.min.io/client/mc/release/linux-amd64/mc
chmod +x /usr/local/bin/mc
mc --help
mc alias set myminio http://localhost:9000 minioadmin minioadmin
# mc admin user svcacct add --access-key "myuserserviceaccount" --secret-key "myuserserviceaccountpassword" myminio minioadmin
mc admin user svcacct add --access-key "u5SybesIDVX9b6Pk" --secret-key "lOpH1v7kdM6H8NkPu1H2R6gLc9jcsmWM" myminio minioadmin

mc
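
As a quick smoke test of the cluster and the service account created above, a hedged sketch using the myminio alias (the bucket name test-bucket is only an example):

mc admin info myminio            # shows the minio nodes and their drive status
mc mb myminio/test-bucket        # create an example bucket
mc cp /etc/hostname myminio/test-bucket/
mc ls myminio/test-bucket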

minio load balancer

bash /vagrant/scripts/install-minio-sidekick.sh --port "18000" --sites "http://vm{116...119}:9000"

High Performance HTTP Sidecar Load Balancer
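
To confirm that sidekick is answering in front of the minio nodes, the MinIO health endpoint can be probed through the sidekick port; a minimal check assuming sidekick listens on 18000 as installed above:

curl -I http://localhost:18000/minio/health/live   # HTTP 200 means a healthy backend is reachable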

flink standalone HA

# vm116 vm117 vm118 vm119
bash /vagrant/scripts/install-flink.sh
# https://blog.csdn.net/hiliang521/article/details/126860098

su -l hduser
## start-cluster
start-cluster.sh
## stop-cluster
stop-cluster.sh
## start a (standby) jobmanager on this node
jobmanager.sh start
## start a taskmanager on this node
taskmanager.sh start
flink run /opt/flink/examples/streaming/WordCount.jar  --input /opt/flink/conf/flink-conf.yaml
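
After submitting the example job, the cluster and job state can be inspected; a brief hedged check (8081 is Flink's default standalone REST/Web UI port):

flink list
# web UI: http://192.168.56.116:8081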

flink cdc

This is an experimental environment based on the Flink CDC tutorial "Streaming ETL for MySQL and Postgres with Flink CDC".

The differences are that a highly available Flink standalone cluster and the Shanghai time zone are used.

mysql

docker compose exec mysql mysql -uroot -p123456
SET GLOBAL time_zone = '+8:00';
flush privileges;
SHOW VARIABLES LIKE '%time_zone%';
-- MySQL
CREATE DATABASE mydb;
USE mydb;
CREATE TABLE products (
  id INTEGER NOT NULL AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(255) NOT NULL,
  description VARCHAR(512)
);
ALTER TABLE products AUTO_INCREMENT = 101;
INSERT INTO products
VALUES (default,"scooter","Small 2-wheel scooter"),
       (default,"car battery","12V car battery"),
       (default,"12-pack drill bits","12-pack of drill bits with sizes ranging from #40 to #3"),
       (default,"hammer","12oz carpenter's hammer"),
       (default,"hammer","14oz carpenter's hammer"),
       (default,"hammer","16oz carpenter's hammer"),
       (default,"rocks","box of assorted rocks"),
       (default,"jacket","water resistent black wind breaker"),
       (default,"spare tire","24 inch spare tire");

CREATE TABLE orders (
  order_id INTEGER NOT NULL AUTO_INCREMENT PRIMARY KEY,
  order_date DATETIME NOT NULL,
  customer_name VARCHAR(255) NOT NULL,
  price DECIMAL(10, 5) NOT NULL,
  product_id INTEGER NOT NULL,
  order_status BOOLEAN NOT NULL -- Whether order has been placed
) AUTO_INCREMENT = 10001;

INSERT INTO orders
VALUES (default, '2020-07-30 10:08:22', 'Jark', 50.50, 102, false),
       (default, '2020-07-30 10:11:09', 'Sally', 15.00, 105, false),
       (default, '2020-07-30 12:00:30', 'Edward', 25.25, 106, false);
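
The mysql-cdc connector reads the MySQL binlog, so the binlog must be enabled in the container; a quick hedged check using the same credentials as above:

docker compose exec mysql mysql -uroot -p123456 -e "SHOW VARIABLES LIKE 'log_bin'; SHOW VARIABLES LIKE 'binlog_format';"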

postgres

docker compose exec postgres psql -h localhost -U postgres
CREATE TABLE shipments (
  shipment_id SERIAL NOT NULL PRIMARY KEY,
  order_id SERIAL NOT NULL,
  origin VARCHAR(255) NOT NULL,
  destination VARCHAR(255) NOT NULL,
  is_arrived BOOLEAN NOT NULL
);
ALTER SEQUENCE public.shipments_shipment_id_seq RESTART WITH 1001;
ALTER TABLE public.shipments REPLICA IDENTITY FULL;
INSERT INTO shipments
VALUES (default,10001,'Beijing','Shanghai',false),
       (default,10002,'Hangzhou','Shanghai',false),
       (default,10003,'Shanghai','Hangzhou',false);
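
The postgres-cdc connector relies on logical decoding, so wal_level must be set to logical; a quick hedged check against the same container:

docker compose exec postgres psql -U postgres -c "SHOW wal_level;"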

cdc to es

sql-client.sh

Enable checkpointing every 3 seconds.

SET execution.checkpointing.interval = 3s;
CREATE TABLE products (
    id INT,
    name STRING,
    description STRING,
    PRIMARY KEY (id) NOT ENFORCED
  ) WITH (
    'connector' = 'mysql-cdc',
    'hostname' = '192.168.56.116',
    'port' = '3306',
    'username' = 'root',
    'password' = '123456',
    'database-name' = 'mydb',
    'table-name' = 'products'
  );

CREATE TABLE orders (
   order_id INT,
   order_date TIMESTAMP(0),
   customer_name STRING,
   price DECIMAL(10, 5),
   product_id INT,
   order_status BOOLEAN,
   PRIMARY KEY (order_id) NOT ENFORCED
 ) WITH (
   'connector' = 'mysql-cdc',
   'hostname' = '192.168.56.116',
   'port' = '3306',
   'username' = 'root',
   'password' = '123456',
   'database-name' = 'mydb',
   'table-name' = 'orders'
 );

CREATE TABLE shipments (
   shipment_id INT,
   order_id INT,
   origin STRING,
   destination STRING,
   is_arrived BOOLEAN,
   PRIMARY KEY (shipment_id) NOT ENFORCED
 ) WITH (
   'connector' = 'postgres-cdc',
   'hostname' = '192.168.56.116',
   'port' = '5432',
   'username' = 'postgres',
   'password' = 'postgres',
   'database-name' = 'postgres',
   'schema-name' = 'public',
   'table-name' = 'shipments'
 );


 CREATE TABLE enriched_orders (
   order_id INT,
   order_date TIMESTAMP(0),
   customer_name STRING,
   price DECIMAL(10, 5),
   product_id INT,
   order_status BOOLEAN,
   product_name STRING,
   product_description STRING,
   shipment_id INT,
   origin STRING,
   destination STRING,
   is_arrived BOOLEAN,
   PRIMARY KEY (order_id) NOT ENFORCED
 ) WITH (
     'connector' = 'elasticsearch-7',
     'hosts' = 'http://192.168.56.116:9200',
     'index' = 'enriched_orders'
 );

 INSERT INTO enriched_orders
 SELECT o.*, p.name, p.description, s.shipment_id, s.origin, s.destination, s.is_arrived
 FROM orders AS o
 LEFT JOIN products AS p ON o.product_id = p.id
 LEFT JOIN shipments AS s ON o.order_id = s.order_id;

Principle explanation

Create source tables that capture the change data from the corresponding database tables, create a sink table that is used to load data into Elasticsearch, and insert the joined source tables into the sink table to write the result to Elasticsearch.
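
Once the INSERT job is running, the enriched_orders index can be queried directly; a minimal hedged check against the Elasticsearch host configured in the sink table:

curl -s 'http://192.168.56.116:9200/enriched_orders/_search?pretty' | head -n 60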

mysql additional test

INSERT INTO orders VALUES (default, '2022-07-30 10:08:22', 'dddd', 666, 105, false);
INSERT INTO orders VALUES (default, '2022-07-30 10:08:22', 'tttt', 888, 105, false);

cdc to doris

create doris database

mysql  -h 192.168.56.111 -P9030 -uroot
CREATE DATABASE IF NOT EXISTS db;


CREATE TABLE db.`test_sink` (
  `id` INT,
  `name` STRING
) ENGINE=OLAP COMMENT "OLAP" 
DISTRIBUTED BY HASH(`id`) BUCKETS 3;
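
If the CREATE TABLE above fails with the "Failed to find 3 backends" error shown in the issues section below, first confirm that all three BEs have joined the FE and are alive; a brief hedged check:

mysql -h 192.168.56.111 -P9030 -uroot -e 'SHOW BACKENDS\G'   # each BE should report Alive: true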

sql-client.sh

Enable checkpointing every 3 seconds.

SET execution.checkpointing.interval = 3s;
CREATE TABLE cdc_test_source (
    id INT,
    name STRING,
    description STRING,
    PRIMARY KEY (id) NOT ENFORCED
  ) WITH (
    'connector' = 'mysql-cdc',
    'hostname' = '192.168.56.116',
    'port' = '3306',
    'username' = 'root',
    'password' = '123456',
    'database-name' = 'mydb',
    'table-name' = 'products'
  );



CREATE TABLE doris_test_sink (
id INT,
name STRING
) WITH (
  'connector' = 'doris',
  'fenodes' = '192.168.56.111:8030',
  'table.identifier' = 'db.test_sink',
  'username' = 'root',
  'password' = '',
  'sink.label-prefix' = 'doris_label',
  'sink.properties.format' = 'json',
  'sink.properties.read_json_by_line' = 'true'
);

INSERT INTO doris_test_sink SELECT id, name FROM cdc_test_source;
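
After the job has run for a moment, the sink table can be queried from the Doris FE to verify that rows are arriving; a small hedged check:

mysql -h 192.168.56.111 -P9030 -uroot -e 'SELECT COUNT(*) FROM db.test_sink;'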

https://github.com/apache/doris/blob/master/samples/doris-demo/flink-demo-v1.1

ref


cdc-vagrant's Issues

ERROR org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: ExpiredTokenRemover received java.lang.InterruptedException: sleep interrupted

2022-12-12 17:37:40,067 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager: Initialized queue: root.default
2022-12-12 17:37:40,067 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager: Initialized queue: root
2022-12-12 17:37:40,071 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager: Initialized root queue root: numChildQueue= 1, capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>usedCapacity=0.0, numApps=0, numContainers=0
2022-12-12 17:37:40,073 INFO org.apache.hadoop.yarn.server.resourcemanager.placement.UserGroupMappingPlacementRule: Initialized queue mappings, override: false
2022-12-12 17:37:40,073 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.WorkflowPriorityMappingsManager: Initialized workflow priority mappings, override: false
2022-12-12 17:37:40,074 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.MultiNodeSortingManager: MultiNode scheduling is 'false', and configured policies are 
2022-12-12 17:37:40,074 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Initialized CapacityScheduler with calculator=class org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator, minimumAllocation=<<memory:1024, vCores:1>>, maximumAllocation=<<memory:8192, vCores:4>>, asynchronousScheduling=false, asyncScheduleInterval=5ms,multiNodePlacementEnabled=false, assignMultipleEnabled=true, maxAssignPerHeartbeat=100, offswitchPerHeartbeatLimit=1
2022-12-12 17:37:40,081 INFO org.apache.hadoop.conf.Configuration: dynamic-resources.xml not found
2022-12-12 17:37:40,086 INFO org.apache.hadoop.yarn.server.resourcemanager.AMSProcessingChain: Initializing AMS Processing chain. Root Processor=[org.apache.hadoop.yarn.server.resourcemanager.DefaultAMSProcessor].
2022-12-12 17:37:40,086 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: disabled placement handler will be used, all scheduling requests will be rejected.
2022-12-12 17:37:40,087 INFO org.apache.hadoop.yarn.server.resourcemanager.AMSProcessingChain: Adding [org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.processor.DisabledPlacementProcessor] tp top of AMS Processing chain. 
2022-12-12 17:37:40,104 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: TimelineServicePublisher is not configured
2022-12-12 17:37:40,220 INFO org.eclipse.jetty.util.log: Logging initialized @2697ms to org.eclipse.jetty.util.log.Slf4jLog
2022-12-12 17:37:40,603 WARN org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets. Reason: Could not read signature secret file: /home/hduser/hadoop-http-auth-signature-secret
2022-12-12 17:37:40,608 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.resourcemanager is not defined
2022-12-12 17:37:40,620 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2022-12-12 17:37:40,626 INFO org.apache.hadoop.http.HttpServer2: Added filter RMAuthenticationFilter (class=org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter) to context cluster
2022-12-12 17:37:40,633 INFO org.apache.hadoop.http.HttpServer2: Added filter RMAuthenticationFilter (class=org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter) to context logs
2022-12-12 17:37:40,633 INFO org.apache.hadoop.http.HttpServer2: Added filter RMAuthenticationFilter (class=org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter) to context static
2022-12-12 17:37:40,634 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context cluster
2022-12-12 17:37:40,634 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2022-12-12 17:37:40,634 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2022-12-12 17:37:41,697 INFO org.apache.hadoop.yarn.webapp.WebApps: Registered webapp guice modules
2022-12-12 17:37:41,715 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 8088
2022-12-12 17:37:41,719 INFO org.eclipse.jetty.server.Server: jetty-9.4.43.v20210629; built: 2021-06-30T11:07:22.254Z; git: 526006ecfa3af7f1a27ef3a288e2bef7ea9dd7e8; jvm 1.8.0_352-b08
2022-12-12 17:37:41,903 INFO org.eclipse.jetty.server.session: DefaultSessionIdManager workerName=node0
2022-12-12 17:37:41,903 INFO org.eclipse.jetty.server.session: No SessionScavenger set, using defaults
2022-12-12 17:37:41,915 INFO org.eclipse.jetty.server.session: node0 Scavenging every 660000ms
2022-12-12 17:37:41,984 WARN org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets. Reason: Could not read signature secret file: /home/hduser/hadoop-http-auth-signature-secret
2022-12-12 17:37:41,996 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
2022-12-12 17:37:42,004 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Starting expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)
2022-12-12 17:37:42,009 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
2022-12-12 17:37:42,088 INFO org.eclipse.jetty.server.handler.ContextHandler: Started o.e.j.s.ServletContextHandler@53bd8fca{logs,/logs,file:///opt/hadoop/logs/,AVAILABLE}
2022-12-12 17:37:42,089 INFO org.eclipse.jetty.server.handler.ContextHandler: Started o.e.j.s.ServletContextHandler@63a270c9{static,/static,jar:file:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-common-3.3.3.jar!/webapps/static,AVAILABLE}
2022-12-12 17:37:47,506 ERROR org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: RECEIVED SIGNAL 1: SIGHUP
2022-12-12 17:37:49,057 INFO org.eclipse.jetty.server.handler.ContextHandler: Started o.e.j.w.WebAppContext@64984b0f{cluster,/,file:///tmp/jetty-hadoop1-8088-hadoop-yarn-common-3_3_3_jar-_-any-6938409273500055326/webapp/,AVAILABLE}{jar:file:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-common-3.3.3.jar!/webapps/cluster}
2022-12-12 17:37:49,086 INFO org.eclipse.jetty.server.AbstractConnector: Started ServerConnector@67c2e933{HTTP/1.1, (http/1.1)}{hadoop1:8088}
2022-12-12 17:37:49,086 INFO org.eclipse.jetty.server.Server: Started @11564ms
2022-12-12 17:37:49,087 INFO org.apache.hadoop.yarn.webapp.WebApps: Web app cluster started at 8088
2022-12-12 17:37:49,881 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue, queueCapacity: 100, scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler, ipcBackoff: false.
2022-12-12 17:37:50,048 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 8033
2022-12-12 17:37:50,725 INFO org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.server.api.ResourceManagerAdministrationProtocolPB to the server
2022-12-12 17:37:50,748 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Transitioning to active state
2022-12-12 17:37:50,730 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2022-12-12 17:37:50,758 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8033: starting
2022-12-12 17:37:50,815 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Updating AMRMToken
2022-12-12 17:37:50,815 INFO org.apache.hadoop.yarn.server.resourcemanager.security.RMContainerTokenSecretManager: Rolling master-key for container-tokens
2022-12-12 17:37:50,815 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Rolling master-key for nm-tokens
2022-12-12 17:37:50,815 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
2022-12-12 17:37:50,815 INFO org.apache.hadoop.yarn.server.resourcemanager.security.RMDelegationTokenSecretManager: storing master key with keyID 1
2022-12-12 17:37:50,815 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Storing RMDTMasterKey.
2022-12-12 17:37:50,842 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Starting expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)
2022-12-12 17:37:50,842 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
2022-12-12 17:37:50,842 INFO org.apache.hadoop.yarn.server.resourcemanager.security.RMDelegationTokenSecretManager: storing master key with keyID 2
2022-12-12 17:37:50,842 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Storing RMDTMasterKey.
2022-12-12 17:37:50,919 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.nodelabels.event.NodeLabelsStoreEventType for class org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager$ForwardingEventHandler
2022-12-12 17:37:51,809 INFO org.apache.hadoop.yarn.nodelabels.store.AbstractFSNodeStore: Created store directory :file:/tmp/hadoop-yarn-hduser/node-attribute
2022-12-12 17:37:52,198 INFO org.apache.hadoop.yarn.nodelabels.store.AbstractFSNodeStore: Finished write mirror at:file:/tmp/hadoop-yarn-hduser/node-attribute/nodeattribute.mirror
2022-12-12 17:37:52,198 INFO org.apache.hadoop.yarn.nodelabels.store.AbstractFSNodeStore: Finished create editlog file at:file:/tmp/hadoop-yarn-hduser/node-attribute/nodeattribute.editlog
2022-12-12 17:37:52,317 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.nodelabels.NodeAttributesStoreEventType for class org.apache.hadoop.yarn.server.resourcemanager.nodelabels.NodeAttributesManagerImpl$ForwardingEventHandler
2022-12-12 17:37:52,320 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.MultiNodeSortingManager: Starting NodeSortingService=MultiNodeSortingManager
2022-12-12 17:37:52,435 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue, queueCapacity: 5000, scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler, ipcBackoff: false.
2022-12-12 17:37:52,445 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 8031
2022-12-12 17:37:52,475 INFO org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.server.api.ResourceTrackerPB to the server
2022-12-12 17:37:52,476 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8031: starting
2022-12-12 17:37:52,476 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2022-12-12 17:37:52,637 INFO org.apache.hadoop.util.JvmPauseMonitor: Starting JVM pause monitor
2022-12-12 17:37:52,709 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue, queueCapacity: 5000, scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler, ipcBackoff: false.
2022-12-12 17:37:52,725 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 8030
2022-12-12 17:37:52,734 INFO org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB to the server
2022-12-12 17:37:52,735 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8030: starting
2022-12-12 17:37:52,781 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2022-12-12 17:37:53,355 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue, queueCapacity: 5000, scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler, ipcBackoff: false.
2022-12-12 17:37:53,367 INFO org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.api.ApplicationClientProtocolPB to the server
2022-12-12 17:37:53,375 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8032: starting
2022-12-12 17:37:53,376 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 8032
2022-12-12 17:37:53,379 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2022-12-12 17:37:54,354 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: NodeManager from node hadoop4(cmPort: 44399 httpPort: 8042) registered with capability: <memory:8192, vCores:8>, assigned nodeId hadoop4:44399
2022-12-12 17:37:54,354 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: NodeManager from node hadoop2(cmPort: 44199 httpPort: 8042) registered with capability: <memory:8192, vCores:8>, assigned nodeId hadoop2:44199
2022-12-12 17:37:54,354 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: NodeManager from node hadoop3(cmPort: 35453 httpPort: 8042) registered with capability: <memory:8192, vCores:8>, assigned nodeId hadoop3:35453
2022-12-12 17:37:54,361 INFO org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: hadoop4:44399 Node Transitioned from NEW to RUNNING
2022-12-12 17:37:54,366 INFO org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: hadoop3:35453 Node Transitioned from NEW to RUNNING
2022-12-12 17:37:54,367 INFO org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: hadoop2:44199 Node Transitioned from NEW to RUNNING
2022-12-12 17:37:54,433 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added node hadoop4:44399 clusterResource: <memory:8192, vCores:8>
2022-12-12 17:37:54,446 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added node hadoop3:35453 clusterResource: <memory:16384, vCores:16>
2022-12-12 17:37:54,460 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added node hadoop2:44199 clusterResource: <memory:24576, vCores:24>
2022-12-12 17:37:54,917 INFO org.apache.hadoop.yarn.server.webproxy.ProxyCA: Created Certificate for OU=YARN-f8cdf15e-f7e8-4d9f-915a-b284670ec109
2022-12-12 17:37:55,034 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Storing CA Certificate and Private Key
2022-12-12 17:37:55,034 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Transitioned to active state
2022-12-12 17:37:55,042 ERROR org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: ExpiredTokenRemover received java.lang.InterruptedException: sleep interrupted
2022-12-12 17:37:55,047 INFO org.eclipse.jetty.server.handler.ContextHandler: Stopped o.e.j.w.WebAppContext@64984b0f{cluster,/,null,STOPPED}{jar:file:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-common-3.3.3.jar!/webapps/cluster}
2022-12-12 17:37:55,054 INFO org.eclipse.jetty.server.AbstractConnector: Stopped ServerConnector@67c2e933{HTTP/1.1, (http/1.1)}{hadoop1:8088}
2022-12-12 17:37:55,054 INFO org.eclipse.jetty.server.session: node0 Stopped scavenging
2022-12-12 17:37:55,055 INFO org.eclipse.jetty.server.handler.ContextHandler: Stopped o.e.j.s.ServletContextHandler@63a270c9{static,/static,jar:file:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-common-3.3.3.jar!/webapps/static,STOPPED}
2022-12-12 17:37:55,055 INFO org.eclipse.jetty.server.handler.ContextHandler: Stopped o.e.j.s.ServletContextHandler@53bd8fca{logs,/logs,file:///opt/hadoop/logs/,STOPPED}
2022-12-12 17:37:55,058 INFO org.apache.hadoop.ipc.Server: Stopping server on 8032
2022-12-12 17:37:55,066 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 8032
2022-12-12 17:37:55,066 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2022-12-12 17:37:55,070 INFO org.apache.hadoop.ipc.Server: Stopping server on 8033
2022-12-12 17:37:55,070 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 8033
2022-12-12 17:37:55,071 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2022-12-12 17:37:55,072 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Transitioning to standby state
2022-12-12 17:37:55,072 WARN org.apache.hadoop.yarn.server.resourcemanager.amlauncher.ApplicationMasterLauncher: org.apache.hadoop.yarn.server.resourcemanager.amlauncher.ApplicationMasterLauncher$LauncherThread interrupted. Returning.
2022-12-12 17:37:55,073 INFO org.apache.hadoop.ipc.Server: Stopping server on 8030
2022-12-12 17:37:55,075 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 8030
2022-12-12 17:37:55,076 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2022-12-12 17:37:55,078 INFO org.apache.hadoop.ipc.Server: Stopping server on 8031
2022-12-12 17:37:55,080 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 8031
2022-12-12 17:37:55,083 INFO org.apache.hadoop.yarn.util.AbstractLivelinessMonitor: NMLivelinessMonitor thread interrupted
2022-12-12 17:37:55,083 ERROR org.apache.hadoop.yarn.event.EventDispatcher: Returning, interrupted : java.lang.InterruptedException
2022-12-12 17:37:55,085 ERROR org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Returning, interrupted : java.lang.InterruptedException: sleep interrupted
2022-12-12 17:37:55,085 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2022-12-12 17:37:55,086 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.activities.ActivitiesManager: org.apache.hadoop.yarn.server.resourcemanager.scheduler.activities.ActivitiesManager thread interrupted
2022-12-12 17:37:55,090 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: AsyncDispatcher is draining to stop, ignoring any new events.
2022-12-12 17:37:55,098 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: AsyncDispatcher is draining to stop, ignoring any new events.
2022-12-12 17:37:55,108 INFO org.apache.hadoop.yarn.util.AbstractLivelinessMonitor: AMLivelinessMonitor thread interrupted
2022-12-12 17:37:55,108 INFO org.apache.hadoop.yarn.util.AbstractLivelinessMonitor: org.apache.hadoop.yarn.server.resourcemanager.rmapp.monitor.RMAppLifetimeMonitor thread interrupted
2022-12-12 17:37:55,108 ERROR org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: ExpiredTokenRemover received java.lang.InterruptedException: sleep interrupted
2022-12-12 17:37:55,108 INFO org.apache.hadoop.yarn.util.AbstractLivelinessMonitor: org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.ContainerAllocationExpirer thread interrupted
2022-12-12 17:37:55,109 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping ResourceManager metrics system...
2022-12-12 17:37:55,108 INFO org.apache.hadoop.yarn.util.AbstractLivelinessMonitor: AMLivelinessMonitor thread interrupted
2022-12-12 17:37:55,116 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: ResourceManager metrics system stopped.
2022-12-12 17:37:55,116 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: ResourceManager metrics system shutdown complete.
2022-12-12 17:37:55,116 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: AsyncDispatcher is draining to stop, ignoring any new events.
2022-12-12 17:37:55,117 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Transitioned to standby state
2022-12-12 17:37:55,117 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down ResourceManager at hadoop1/172.21.0.21
************************************************************/

org.apache.flink.streaming.runtime.tasks.StreamTaskException: Cannot instantiate user function.

2022-12-07 18:12:51
org.apache.flink.streaming.runtime.tasks.StreamTaskException: Cannot instantiate user function.
	at org.apache.flink.streaming.api.graph.StreamConfig.getStreamOperatorFactory(StreamConfig.java:399)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain.createOperator(OperatorChain.java:763)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain.createOperatorChain(OperatorChain.java:736)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain.createOutputCollector(OperatorChain.java:676)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain.createOperatorChain(OperatorChain.java:726)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain.createOutputCollector(OperatorChain.java:676)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain.createOperatorChain(OperatorChain.java:726)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain.createOutputCollector(OperatorChain.java:676)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain.createOperatorChain(OperatorChain.java:726)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain.createOutputCollector(OperatorChain.java:676)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain.<init>(OperatorChain.java:195)
	at org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.<init>(RegularOperatorChain.java:60)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:681)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:669)
	at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:935)
	at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:904)
	at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:728)
	at org.apache.flink.runtime.taskmanager.Task.run(Task.java:550)
	at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.io.StreamCorruptedException: unexpected block data
	at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1704)
	at java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2496)
	at java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2346)
	at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2228)
	at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1687)
	at java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2496)
	at java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2346)
	at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2228)
	at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1687)
	at java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2496)
	at java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2346)
	at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2228)
	at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1687)
	at java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2496)
	at java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2346)
	at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2228)
	at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1687)
	at java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2496)
	at java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2346)
	at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2228)
	at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1687)
	at java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2496)
	at java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2390)
	at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2228)
	at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1687)
	at java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2496)
	at java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2390)
	at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2228)
	at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1687)
	at java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2496)
	at java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2390)
	at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2228)
	at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1687)
	at java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:489)
	at java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:447)
	at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:617)
	at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:602)
	at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:589)
	at org.apache.flink.util.InstantiationUtil.readObjectFromConfig(InstantiationUtil.java:543)
	at org.apache.flink.streaming.api.graph.StreamConfig.getStreamOperatorFactory(StreamConfig.java:383)
	... 18 more

ERROR 1105 (HY000): errCode = 2, detailMessage = Failed to find 3 backends for policy: cluster|query|load|schedule|tags|medium: default_cluster|false|false|true|[{"location" : "default"}]|HDD

mysql  -h 192.168.56.111 -P9030 -uroot
CREATE DATABASE IF NOT EXISTS db;


CREATE TABLE db.`table` (
  `tag` bigint(20) NULL COMMENT "user tag",
  `hid` smallint(6) NULL COMMENT "Bucket ID",
  `user_id` bitmap BITMAP_UNION NULL COMMENT ""
) ENGINE=OLAP
AGGREGATE KEY(`tag`, `hid`)
COMMENT "OLAP"
DISTRIBUTED BY HASH(`hid`) BUCKETS 3
ERROR 1105 (HY000): errCode = 2, detailMessage = Failed to find 3 backends for policy: cluster|query|load|schedule|tags|medium: default_cluster|false|false|true|[{"location" : "default"}]|HDD

hive ./schematool -initSchema -dbType postgres

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hive/lib/log4j-slf4j-impl-2.17.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop/share/hadoop/common/lib/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL:	 jdbc:derby:;databaseName=metastore_db;create=true
Metastore Connection Driver :	 org.apache.derby.jdbc.EmbeddedDriver
Metastore connection User:	 APP
Starting metastore schema initialization to 3.1.0
Initialization script hive-schema-3.1.0.postgres.sql


Error: Syntax error: Encountered "statement_timeout" at line 1, column 5. (state=42X01,code=30000)
org.apache.hadoop.hive.metastore.HiveMetaException: Schema initialization FAILED! Metastore state would be inconsistent !!
Underlying cause: java.io.IOException : Schema script failed, errorcode 2
Use --verbose for detailed stacktrace.
*** schemaTool failed ***

Execution failed: Error Failed to execute sql: java.sql.SQLException: (conn=17) errCode = 2, detailMessage = It is highly NOT RECOMMENDED to use DROP BACKEND stmt.It is not safe to directly drop a backend. All data on this backend will be discarded permanently. If you insist, use DROPP BACKEND stmt (double P).

ALTER SYSTEM DROP BACKEND '192.168.56.115:9050';
Execution failed: Error Failed to execute sql: java.sql.SQLException: (conn=17) errCode = 2, detailMessage = It is highly NOT RECOMMENDED to use DROP BACKEND stmt.It is not safe to directly drop a backend. All data on this backend will be discarded permanently. If you insist, use DROPP BACKEND stmt (double P).

sudo: hduser : user NOT in sudoers ; TTY=pts/0 ; PWD=/var/log ; USER=root ; COMMAND=ttail -f auth.log

Nov 29 12:09:20 ubuntu-focal su: pam_unix(su-l:session): session closed for user root
Nov 29 12:09:23 ubuntu-focal su: (to hduser) vagrant on pts/0
Nov 29 12:09:23 ubuntu-focal su: pam_unix(su-l:session): session opened for user hduser by vagrant(uid=0)
Nov 29 12:16:04 ubuntu-focal sshd[31204]: Connection closed by authenticating user hduser 192.168.56.116 port 51156 [preauth]
Nov 29 12:17:01 ubuntu-focal CRON[31252]: pam_unix(cron:session): session opened for user root by (uid=0)
Nov 29 12:17:01 ubuntu-focal CRON[31252]: pam_unix(cron:session): session closed for user root
Nov 29 12:18:59 ubuntu-focal sshd[31347]: Connection closed by authenticating user hduser 192.168.56.116 port 52756 [preauth]
Nov 29 12:19:50 ubuntu-focal sudo:   hduser : user NOT in sudoers ; TTY=pts/0 ; PWD=/var/log ; USER=root ; COMMAND=ttail -f auth.log
Nov 29 12:20:00 ubuntu-focal su: (to root) vagrant on pts/0
Nov 29 12:20:00 ubuntu-focal su: pam_unix(su-l:session): session opened for user root by vagrant(uid=4000)
Nov 29 12:21:46 ubuntu-focal sshd[31519]: Connection closed by authenticating user hduser 192.168.56.116 port 51458 [preauth]

org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /data/hadoop/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.

2022-11-30 08:46:39,164 WARN common.Storage: Storage directory /data/hadoop/dfs/name does not exist
2022-11-30 08:46:39,167 WARN namenode.FSNamesystem: Encountered exception loading fsimage
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /data/hadoop/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:392)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:243)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1201)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:779)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:681)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:768)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:1020)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:995)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1769)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1834)
2022-11-30 08:46:39,177 INFO handler.ContextHandler: Stopped o.e.j.w.WebAppContext@372ea2bc{hdfs,/,null,STOPPED}{file:/opt/hadoop/share/hadoop/hdfs/webapps/hdfs}
2022-11-30 08:46:39,182 INFO server.AbstractConnector: Stopped ServerConnector@29df4d43{HTTP/1.1, (http/1.1)}{vm117:9870}
2022-11-30 08:46:39,182 INFO server.session: node0 Stopped scavenging
2022-11-30 08:46:39,184 INFO handler.ContextHandler: Stopped o.e.j.s.ServletContextHandler@7bb6ab3a{static,/static,file:///opt/hadoop/share/hadoop/hdfs/webapps/static/,STOPPED}
2022-11-30 08:46:39,184 INFO handler.ContextHandler: Stopped o.e.j.s.ServletContextHandler@5733f295{logs,/logs,file:///opt/hadoop/logs/,STOPPED}
2022-11-30 08:46:39,186 INFO impl.MetricsSystemImpl: Stopping NameNode metrics system...
2022-11-30 08:46:39,187 INFO impl.MetricsSystemImpl: NameNode metrics system stopped.
2022-11-30 08:46:39,188 INFO impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2022-11-30 08:46:39,188 ERROR namenode.NameNode: Failed to start namenode.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /data/hadoop/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:392)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:243)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1201)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:779)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:681)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:768)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:1020)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:995)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1769)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1834)
2022-11-30 08:46:39,192 INFO util.ExitUtil: Exiting with status 1: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /data/hadoop/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
2022-11-30 08:46:39,199 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at vm117/192.168.56.117
************************************************************/

Exception in thread "main" java.lang.RuntimeException: com.ctc.wstx.exc.WstxUnexpectedCharException: Unexpected character '=' (code 61); expected a semi-colon after the reference for entity 'useSSL'

Exception in thread "main" java.lang.RuntimeException: com.ctc.wstx.exc.WstxUnexpectedCharException: Unexpected character '=' (code 61); expected a semi-colon after the reference for entity 'useSSL'
 at [row,col,system-id]: [22,87,"file:/opt/hive/conf/hive-site.xml"]
	at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3092)
	at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:3041)
	at org.apache.hadoop.conf.Configuration.loadProps(Configuration.java:2914)
	at org.apache.hadoop.conf.Configuration.addResourceObject(Configuration.java:1034)
	at org.apache.hadoop.conf.Configuration.addResource(Configuration.java:939)
	at org.apache.hadoop.hive.conf.HiveConf.initialize(HiveConf.java:5154)
	at org.apache.hadoop.hive.conf.HiveConf.<init>(HiveConf.java:5107)
	at org.apache.hive.beeline.HiveSchemaTool.<init>(HiveSchemaTool.java:96)
	at org.apache.hive.beeline.HiveSchemaTool.main(HiveSchemaTool.java:1473)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
Caused by: com.ctc.wstx.exc.WstxUnexpectedCharException: Unexpected character '=' (code 61); expected a semi-colon after the reference for entity 'useSSL'
 at [row,col,system-id]: [22,87,"file:/opt/hive/conf/hive-site.xml"]
	at com.ctc.wstx.sr.StreamScanner.throwUnexpectedChar(StreamScanner.java:666)
	at com.ctc.wstx.sr.StreamScanner.parseEntityName(StreamScanner.java:2080)
	at com.ctc.wstx.sr.StreamScanner.fullyResolveEntity(StreamScanner.java:1538)
	at com.ctc.wstx.sr.BasicStreamReader.nextFromTree(BasicStreamReader.java:2818)
	at com.ctc.wstx.sr.BasicStreamReader.next(BasicStreamReader.java:1121)
	at org.apache.hadoop.conf.Configuration$Parser.parseNext(Configuration.java:3396)
	at org.apache.hadoop.conf.Configuration$Parser.parse(Configuration.java:3182)
	at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3075)
	... 14 more
org.apache.hadoop.hive.metastore.HiveMetaException: Failed to load driver
Underlying cause: java.lang.ClassNotFoundException : com.mysql.jdbc.Driver
org.apache.hadoop.hive.metastore.HiveMetaException: Failed to load driver
	at org.apache.hadoop.hive.metastore.tools.HiveSchemaHelper.getConnectionToMetastore(HiveSchemaHelper.java:97)
	at org.apache.hive.beeline.HiveSchemaTool.getConnectionToMetastore(HiveSchemaTool.java:169)
	at org.apache.hive.beeline.HiveSchemaTool.testConnectionToMetastore(HiveSchemaTool.java:475)
	at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:581)
	at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:567)
	at org.apache.hive.beeline.HiveSchemaTool.main(HiveSchemaTool.java:1517)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
Caused by: java.lang.ClassNotFoundException: com.mysql.jdbc.Driver
	at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:264)
	at org.apache.hadoop.hive.metastore.tools.HiveSchemaHelper.getConnectionToMetastore(HiveSchemaHelper.java:85)
	... 11 more

Exception in thread "main" org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby.

Exception in thread "main" org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error
	at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:108)
	at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:2094)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1550)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3342)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1208)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:1042)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:604)
	at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:572)
	at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:556)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1093)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1043)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:971)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2976)

	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1584)
	at org.apache.hadoop.ipc.Client.call(Client.java:1530)
	at org.apache.hadoop.ipc.Client.call(Client.java:1427)
	at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:258)
	at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:139)
	at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:966)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:433)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:166)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:158)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:96)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:362)
	at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1739)
	at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1753)
	at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1750)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1765)
	at org.apache.hadoop.fs.FileSystem.isFile(FileSystem.java:1865)
	at org.example.HdfsCheck.main(HdfsCheck.java:22)

Process finished with exit code 1

java.lang.IllegalArgumentException: Unable to construct journal, qjournal://vm119:8485;vm120:8485;vm121:8485/mycluster

java.lang.IllegalArgumentException: Unable to construct journal, qjournal://vm119:8485;vm120:8485;vm121:8485/mycluster
	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.createJournal(FSEditLog.java:1877)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournals(FSEditLog.java:299)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournalsForWrite(FSEditLog.java:264)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1257)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1726)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1834)
Caused by: java.lang.reflect.InvocationTargetException
	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.createJournal(FSEditLog.java:1864)
	... 5 more
Caused by: java.net.UnknownHostException: vm119:8485
	at org.apache.hadoop.hdfs.server.common.Util.getAddressesList(Util.java:378)
	at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.createLoggers(QuorumJournalManager.java:417)
	at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.createLoggers(QuorumJournalManager.java:199)
	at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.<init>(QuorumJournalManager.java:147)
	at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.<init>(QuorumJournalManager.java:126)
	... 10 more
2022-11-29 23:34:05,837 INFO util.ExitUtil: Exiting with status 1: java.lang.IllegalArgumentException: Unable to construct journal, qjournal://vm119:8485;vm120:8485;vm121:8485/mycluster
2022-11-29 23:34:05,845 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at vm116/192.168.56.116
************************************************************/


Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster

hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.4.jar wordcount /example/input/my_wordcount.txt /example/out/my_wordcount
bash: warning: setlocale: LC_ALL: cannot change locale (en_US.utf-8)
2023-04-04 14:07:18,273 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/root/.staging/job_1680587882463_0002
2023-04-04 14:07:18,625 INFO input.FileInputFormat: Total input files to process : 1
2023-04-04 14:07:18,837 INFO mapreduce.JobSubmitter: number of splits:1
2023-04-04 14:07:19,067 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1680587882463_0002
2023-04-04 14:07:19,069 INFO mapreduce.JobSubmitter: Executing with tokens: []
2023-04-04 14:07:19,290 INFO conf.Configuration: resource-types.xml not found
2023-04-04 14:07:19,290 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2023-04-04 14:07:19,567 INFO impl.YarnClientImpl: Submitted application application_1680587882463_0002
2023-04-04 14:07:19,634 INFO mapreduce.Job: The url to track the job: http://m211:8088/proxy/application_1680587882463_0002/
2023-04-04 14:07:19,635 INFO mapreduce.Job: Running job: job_1680587882463_0002
2023-04-04 14:07:25,707 INFO mapreduce.Job: Job job_1680587882463_0002 running in uber mode : false
2023-04-04 14:07:25,709 INFO mapreduce.Job:  map 0% reduce 0%
2023-04-04 14:07:25,724 INFO mapreduce.Job: Job job_1680587882463_0002 failed with state FAILED due to: Application application_1680587882463_0002 failed 2 times due to AM Container for appattempt_1680587882463_0002_000002 exited with  exitCode: 1
Failing this attempt.Diagnostics: [2023-04-04 14:07:24.827]Exception from container-launch.
Container id: container_1680587882463_0002_02_000001
Exit code: 1

[2023-04-04 14:07:24.877]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster

Please check whether your <HADOOP_HOME>/etc/hadoop/mapred-site.xml contains the below configuration:
<property>
  <name>yarn.app.mapreduce.am.env</name>
  <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution directory}</value>
</property>
<property>
  <name>mapreduce.map.env</name>
  <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution directory}</value>
</property>
<property>
  <name>mapreduce.reduce.env</name>
  <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution directory}</value>
</property>

[2023-04-04 14:07:24.878]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster

Please check whether your <HADOOP_HOME>/etc/hadoop/mapred-site.xml contains the below configuration:
<property>
  <name>yarn.app.mapreduce.am.env</name>
  <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution directory}</value>
</property>
<property>
  <name>mapreduce.map.env</name>
  <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution directory}</value>
</property>
<property>
  <name>mapreduce.reduce.env</name>
  <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution directory}</value>
</property>

For more detailed output, check the application tracking page: http://m211:8088/cluster/app/application_1680587882463_0002 Then click on links to logs of each attempt.
. Failing the application.
2023-04-04 14:07:25,750 INFO mapreduce.Job: Counters: 0
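
The diagnostics above already name the fix: YARN containers cannot find the MapReduce classes because HADOOP_MAPRED_HOME is not set for the AM/map/reduce environments. A sketch of the change, assuming /opt/hadoop is the distribution directory as in the cluster table above:

# add to $HADOOP_HOME/etc/hadoop/mapred-site.xml, inside <configuration>, on the node that submits the job:
#   <property><name>yarn.app.mapreduce.am.env</name><value>HADOOP_MAPRED_HOME=/opt/hadoop</value></property>
#   <property><name>mapreduce.map.env</name><value>HADOOP_MAPRED_HOME=/opt/hadoop</value></property>
#   <property><name>mapreduce.reduce.env</name><value>HADOOP_MAPRED_HOME=/opt/hadoop</value></property>
# then re-run the example job:
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.4.jar wordcount /example/input/my_wordcount.txt /example/out/my_wordcount2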

Execution failed: Error Failed to execute sql: java.sql.SQLNonTransientConnectionException: (conn=10) Socket error

2022-11-28 16:36:55,130 WARN (heartbeat mgr|21) [HeartbeatMgr.runAfterCatalogReady():139] get bad heartbeat response: type: BACKEND, status: BAD, msg: java.net.NoRouteToHostException: No route to host (Host unreachable), beId: 10003, bePort: 0, httpPort: 0, brpcPort: 0
2022-11-28 16:36:55,147 WARN (heartbeat mgr|21) [HeartbeatMgr.runAfterCatalogReady():139] get bad heartbeat response: type: FRONTEND, status: BAD, msg: java.net.ConnectException: Connection refused (Connection refused), name: 192.168.56.112_9010_1669624263304, version: null, queryPort: 0, rpcPort: 0, replayedJournalId: 0
2022-11-28 16:36:55,233 WARN (qtp1138107948-119|119) [ExceptionHandlerExceptionResolver.doResolveHandlerMethodException():434] Failure in @ExceptionHandler org.apache.doris.httpv2.exception.RestApiExceptionHandler#unexpectedExceptionHandler(Exception)
org.eclipse.jetty.io.EofException: Closed
	at org.eclipse.jetty.server.HttpOutput.checkWritable(HttpOutput.java:771) ~[jetty-server-9.4.48.v20220622.jar:9.4.48.v20220622]
	at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:795) ~[jetty-server-9.4.48.v20220622.jar:9.4.48.v20220622]
	at org.springframework.util.StreamUtils$NonClosingOutputStream.write(StreamUtils.java:287) ~[spring-core-5.3.22.jar:5.3.22]
	at com.fasterxml.jackson.core.json.UTF8JsonGenerator._flushBuffer(UTF8JsonGenerator.java:2171) ~[jackson-core-2.13.3.jar:2.13.3]
	at com.fasterxml.jackson.core.json.UTF8JsonGenerator.flush(UTF8JsonGenerator.java:1184) ~[jackson-core-2.13.3.jar:2.13.3]
	at com.fasterxml.jackson.databind.ObjectWriter.writeValue(ObjectWriter.java:1008) ~[jackson-databind-2.12.1.jar:2.12.1]
	at org.springframework.http.converter.json.AbstractJackson2HttpMessageConverter.writeInternal(AbstractJackson2HttpMessageConverter.java:456) ~[spring-web-5.3.22.jar:5.3.22]
	at org.springframework.http.converter.AbstractGenericHttpMessageConverter.write(AbstractGenericHttpMessageConverter.java:104) ~[spring-web-5.3.22.jar:5.3.22]
	at org.springframework.web.servlet.mvc.method.annotation.AbstractMessageConverterMethodProcessor.writeWithMessageConverters(AbstractMessageConverterMethodProcessor.java:290) ~[spring-webmvc-5.3.22.jar:5.3.22]
	at org.springframework.web.servlet.mvc.method.annotation.HttpEntityMethodProcessor.handleReturnValue(HttpEntityMethodProcessor.java:219) ~[spring-webmvc-5.3.22.jar:5.3.22]
	at org.springframework.web.method.support.HandlerMethodReturnValueHandlerComposite.handleReturnValue(HandlerMethodReturnValueHandlerComposite.java:78) ~[spring-web-5.3.22.jar:5.3.22]
	at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:135) ~[spring-webmvc-5.3.22.jar:5.3.22]
	at org.springframework.web.servlet.mvc.method.annotation.ExceptionHandlerExceptionResolver.doResolveHandlerMethodException(ExceptionHandlerExceptionResolver.java:428) ~[spring-webmvc-5.3.22.jar:5.3.22]
	at org.springframework.web.servlet.handler.AbstractHandlerMethodExceptionResolver.doResolveException(AbstractHandlerMethodExceptionResolver.java:75) ~[spring-webmvc-5.3.22.jar:5.3.22]
	at org.springframework.web.servlet.handler.AbstractHandlerExceptionResolver.resolveException(AbstractHandlerExceptionResolver.java:142) ~[spring-webmvc-5.3.22.jar:5.3.22]
	at org.springframework.web.servlet.handler.HandlerExceptionResolverComposite.resolveException(HandlerExceptionResolverComposite.java:80) ~[spring-webmvc-5.3.22.jar:5.3.22]
	at org.springframework.web.servlet.DispatcherServlet.processHandlerException(DispatcherServlet.java:1330) ~[spring-webmvc-5.3.22.jar:5.3.22]
	at org.springframework.web.servlet.DispatcherServlet.processDispatchResult(DispatcherServlet.java:1141) ~[spring-webmvc-5.3.22.jar:5.3.22]
	at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1087) ~[spring-webmvc-5.3.22.jar:5.3.22]
	at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:963) ~[spring-webmvc-5.3.22.jar:5.3.22]
	at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1006) ~[spring-webmvc-5.3.22.jar:5.3.22]
	at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:898) ~[spring-webmvc-5.3.22.jar:5.3.22]
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:687) ~[javax.servlet-api-3.1.0.jar:3.1.0]
	at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:883) ~[spring-webmvc-5.3.22.jar:5.3.22]
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) ~[javax.servlet-api-3.1.0.jar:3.1.0]
	at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:799) ~[jetty-servlet-9.4.48.v20220622.jar:9.4.48.v20220622]
	at org.eclipse.jetty.servlet.ServletHandler$ChainEnd.doFilter(ServletHandler.java:1656) ~[jetty-servlet-9.4.48.v20220622.jar:9.4.48.v20220622]
	at org.eclipse.jetty.websocket.server.WebSocketUpgradeFilter.doFilter(WebSocketUpgradeFilter.java:292) ~[websocket-server-9.4.48.v20220622.jar:9.4.48.v20220622]
	at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193) ~[jetty-servlet-9.4.48.v20220622.jar:9.4.48.v20220622]
	at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1626) ~[jetty-servlet-9.4.48.v20220622.jar:9.4.48.v20220622]
	at org.apache.doris.httpv2.interceptor.ServletTraceIterceptor.doFilter(ServletTraceIterceptor.java:54) ~[doris-fe.jar:1.0-SNAPSHOT]
	at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:201) ~[jetty-servlet-9.4.48.v20220622.jar:9.4.48.v20220622]
	at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1626) ~[jetty-servlet-9.4.48.v20220622.jar:9.4.48.v20220622]
	at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) ~[spring-web-5.3.22.jar:5.3.22]
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117) ~[spring-web-5.3.22.jar:5.3.22]
	at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193) ~[jetty-servlet-9.4.48.v20220622.jar:9.4.48.v20220622]
	at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1626) ~[jetty-servlet-9.4.48.v20220622.jar:9.4.48.v20220622]
	at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) ~[spring-web-5.3.22.jar:5.3.22]
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117) ~[spring-web-5.3.22.jar:5.3.22]
	at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193) ~[jetty-servlet-9.4.48.v20220622.jar:9.4.48.v20220622]
	at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1626) ~[jetty-servlet-9.4.48.v20220622.jar:9.4.48.v20220622]
	at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) ~[spring-web-5.3.22.jar:5.3.22]
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117) ~[spring-web-5.3.22.jar:5.3.22]
	at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193) ~[jetty-servlet-9.4.48.v20220622.jar:9.4.48.v20220622]
	at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1626) ~[jetty-servlet-9.4.48.v20220622.jar:9.4.48.v20220622]
	at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:552) ~[jetty-servlet-9.4.48.v20220622.jar:9.4.48.v20220622]
	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) ~[jetty-server-9.4.48.v20220622.jar:9.4.48.v20220622]
	at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:600) ~[jetty-security-9.4.48.v20220622.jar:9.4.48.v20220622]
	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127) ~[jetty-server-9.4.48.v20220622.jar:9.4.48.v20220622]
	at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235) ~[jetty-server-9.4.48.v20220622.jar:9.4.48.v20220622]
	at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1624) ~[jetty-server-9.4.48.v20220622.jar:9.4.48.v20220622]
	at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233) ~[jetty-server-9.4.48.v20220622.jar:9.4.48.v20220622]
	at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1440) ~[jetty-server-9.4.48.v20220622.jar:9.4.48.v20220622]
	at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188) ~[jetty-server-9.4.48.v20220622.jar:9.4.48.v20220622]
	at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:505) ~[jetty-servlet-9.4.48.v20220622.jar:9.4.48.v20220622]
	at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1594) ~[jetty-server-9.4.48.v20220622.jar:9.4.48.v20220622]
	at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186) ~[jetty-server-9.4.48.v20220622.jar:9.4.48.v20220622]
	at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1355) ~[jetty-server-9.4.48.v20220622.jar:9.4.48.v20220622]
	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) ~[jetty-server-9.4.48.v20220622.jar:9.4.48.v20220622]
	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127) ~[jetty-server-9.4.48.v20220622.jar:9.4.48.v20220622]
	at org.eclipse.jetty.server.Server.handle(Server.java:516) ~[jetty-server-9.4.48.v20220622.jar:9.4.48.v20220622]
	at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:487) ~[jetty-server-9.4.48.v20220622.jar:9.4.48.v20220622]
	at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:732) ~[jetty-server-9.4.48.v20220622.jar:9.4.48.v20220622]
	at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:479) ~[jetty-server-9.4.48.v20220622.jar:9.4.48.v20220622]
	at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277) ~[jetty-server-9.4.48.v20220622.jar:9.4.48.v20220622]
	at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311) ~[jetty-io-9.4.48.v20220622.jar:9.4.48.v20220622]
	at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105) ~[jetty-io-9.4.48.v20220622.jar:9.4.48.v20220622]
	at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104) ~[jetty-io-9.4.48.v20220622.jar:9.4.48.v20220622]
	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:338) ~[jetty-util-9.4.48.v20220622.jar:9.4.48.v20220622]
	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:315) ~[jetty-util-9.4.48.v20220622.jar:9.4.48.v20220622]
	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) ~[jetty-util-9.4.48.v20220622.jar:9.4.48.v20220622]
	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131) ~[jetty-util-9.4.48.v20220622.jar:9.4.48.v20220622]
	at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:409) ~[jetty-util-9.4.48.v20220622.jar:9.4.48.v20220622]
	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) ~[jetty-util-9.4.48.v20220622.jar:9.4.48.v20220622]
	at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) ~[jetty-util-9.4.48.v20220622.jar:9.4.48.v20220622]
	at java.lang.Thread.run(Thread.java:829) ~[?:?]
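
The heartbeat warnings point at plain connectivity problems: "No route to host" for BE 10003 and "Connection refused" for the FE on 192.168.56.112, so the client connection ends up dropped with a socket error. A first check, assuming the default Doris ports (9030 query, 9010 edit log, 9050 heartbeat):

# are the FE/BE processes actually registered and alive?
mysql -h 192.168.56.111 -P 9030 -uroot -e 'SHOW FRONTENDS;'
mysql -h 192.168.56.111 -P 9030 -uroot -e 'SHOW BACKENDS;'
# "No route to host" usually means a firewall on the target VM; on Ubuntu/Debian guests:
sudo ufw status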

attempting to log in with the new key(s), to filter out any that are already installed

/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/hduser/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
Warning: Permanently added '192.168.56.118' (ECDSA) to the list of known hosts.
hduser@192.168.56.118: Permission denied (publickey).
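
ssh-copy-id needs some other way to authenticate before the key is installed, and the target only accepts publickey auth, hence the failure. One workaround in this Vagrant setup is to move the key through the shared /vagrant folder instead:

# on the source VM
cp /home/hduser/.ssh/id_rsa.pub /vagrant/hduser_id_rsa.pub
# on 192.168.56.118
cat /vagrant/hduser_id_rsa.pub >> /home/hduser/.ssh/authorized_keys
chmod 600 /home/hduser/.ssh/authorized_keys
# (alternatively, temporarily set PasswordAuthentication yes in /etc/ssh/sshd_config and restart sshd)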

org.apache.doris.shaded.com.fasterxml.jackson.databind.exc.InvalidFormatException: Cannot deserialize value of type `int` from String "

Flink SQL>  INSERT INTO enriched_orders_doris
>  SELECT o.*, p.name, p.description, s.shipment_id, s.origin, s.destination, s.is_arrived
>  FROM orders AS o
>  LEFT JOIN products AS p ON o.product_id = p.id
>  LEFT JOIN shipments AS s ON o.order_id = s.order_id;
[INFO] Submitting SQL update statement to the cluster...
[ERROR] Could not execute SQL statement. Reason:
org.apache.doris.shaded.com.fasterxml.jackson.databind.exc.InvalidFormatException: Cannot deserialize value of type `int` from String "errCode = 7, detailMessage = unknown databases, dbName=default_cluster:db": not a valid `int` value
 at [Source: (String)""errCode = 7, detailMessage = unknown databases, dbName=default_cluster:db""; line: 1, column: 1]
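
The Doris sink points at a database that does not exist yet (default_cluster:db), and the connector then fails again while trying to parse the error response as an int. Create the target database (and the table that enriched_orders_doris maps to) in Doris before submitting the INSERT, e.g.:

mysql -h 192.168.56.111 -P 9030 -uroot -e "CREATE DATABASE IF NOT EXISTS db;"
# then create the sink table under db and re-run the Flink SQL INSERT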



org.apache.flink.runtime.JobException: Recovery is suppressed by NoRestartBackoffTimeStrategy

2022-12-01 22:23:13
org.apache.flink.runtime.JobException: Recovery is suppressed by NoRestartBackoffTimeStrategy
	at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:139)
	at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getGlobalFailureHandlingResult(ExecutionFailureHandler.java:102)
	at org.apache.flink.runtime.scheduler.DefaultScheduler.handleGlobalFailure(DefaultScheduler.java:299)
	at org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder$LazyInitializedCoordinatorContext.lambda$failJob$0(OperatorCoordinatorHolder.java:635)
	at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$handleRunAsync$4(AkkaRpcActor.java:453)
	at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
	at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:453)
	at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:218)
	at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:84)
	at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:168)
	at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24)
	at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20)
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:123)
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122)
	at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20)
	at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
	at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
	at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
	at akka.actor.Actor.aroundReceive(Actor.scala:537)
	at akka.actor.Actor.aroundReceive$(Actor.scala:535)
	at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:220)
	at akka.actor.ActorCell.receiveMessage(ActorCell.scala:580)
	at akka.actor.ActorCell.invoke(ActorCell.scala:548)
	at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270)
	at akka.dispatch.Mailbox.run(Mailbox.scala:231)
	at akka.dispatch.Mailbox.exec(Mailbox.scala:243)
	at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
	at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
	at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)
	at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)
	at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183)
Caused by: org.apache.flink.util.FlinkException: Global failure triggered by OperatorCoordinator for 'Source: products[3]' (operator feca28aff5a3958840bee985ee7de4d3).
	at org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder$LazyInitializedCoordinatorContext.failJob(OperatorCoordinatorHolder.java:617)
	at org.apache.flink.runtime.operators.coordination.RecreateOnResetOperatorCoordinator$QuiesceableContext.failJob(RecreateOnResetOperatorCoordinator.java:237)
	at org.apache.flink.runtime.source.coordinator.SourceCoordinatorContext.failJob(SourceCoordinatorContext.java:360)
	at org.apache.flink.runtime.source.coordinator.SourceCoordinator.start(SourceCoordinator.java:217)
	at org.apache.flink.runtime.operators.coordination.RecreateOnResetOperatorCoordinator$DeferrableCoordinator.applyCall(RecreateOnResetOperatorCoordinator.java:315)
	at org.apache.flink.runtime.operators.coordination.RecreateOnResetOperatorCoordinator.start(RecreateOnResetOperatorCoordinator.java:70)
	at org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder.start(OperatorCoordinatorHolder.java:198)
	at org.apache.flink.runtime.scheduler.DefaultOperatorCoordinatorHandler.startOperatorCoordinators(DefaultOperatorCoordinatorHandler.java:165)
	at org.apache.flink.runtime.scheduler.DefaultOperatorCoordinatorHandler.startAllOperatorCoordinators(DefaultOperatorCoordinatorHandler.java:82)
	at org.apache.flink.runtime.scheduler.SchedulerBase.startScheduling(SchedulerBase.java:605)
	at org.apache.flink.runtime.jobmaster.JobMaster.startScheduling(JobMaster.java:1046)
	at org.apache.flink.runtime.jobmaster.JobMaster.startJobExecution(JobMaster.java:963)
	at org.apache.flink.runtime.jobmaster.JobMaster.onStart(JobMaster.java:422)
	at org.apache.flink.runtime.rpc.RpcEndpoint.internalCallOnStart(RpcEndpoint.java:198)
	at org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.lambda$start$0(AkkaRpcActor.java:622)
	at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
	at org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.start(AkkaRpcActor.java:621)
	at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleControlMessage(AkkaRpcActor.java:190)
	at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24)
	at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20)
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:123)
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122)
	at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20)
	at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
	... 14 more
Caused by: org.apache.flink.table.api.ValidationException: The MySQL server has a timezone offset (0 seconds ahead of UTC) which does not match the configured timezone Asia/Shanghai. Specify the right server-time-zone to avoid inconsistencies for time-related fields.
	at com.ververica.cdc.connectors.mysql.MySqlValidator.checkTimeZone(MySqlValidator.java:191)
	at com.ververica.cdc.connectors.mysql.MySqlValidator.validate(MySqlValidator.java:81)
	at com.ververica.cdc.connectors.mysql.source.MySqlSource.createEnumerator(MySqlSource.java:170)
	at org.apache.flink.runtime.source.coordinator.SourceCoordinator.start(SourceCoordinator.java:213)
	... 34 more
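
The root cause at the bottom is self-describing: the MySQL server runs in UTC while the CDC source assumes Asia/Shanghai. Either align MySQL's time zone or tell the connector the real server time zone; a sketch (the MySQL host and credentials below are placeholders):

# option 1: make MySQL match the connector's expectation
mysql -h <mysql_host> -uroot -p -e "SET GLOBAL time_zone = 'Asia/Shanghai';"
# option 2: add the matching option to the mysql-cdc source table in Flink SQL:
#   'server-time-zone' = 'UTC'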

java.lang.IllegalArgumentException: Journal dir 'file:/data/hadoop/dfs/journal' should be an absolute path

************************************************************/
2022-11-30 08:24:05,184 INFO server.JournalNode: registered UNIX signal handlers for [TERM, HUP, INT]
2022-11-30 08:24:05,441 ERROR server.JournalNode: Failed to start journalnode.
java.lang.IllegalArgumentException: Journal dir 'file:/data/hadoop/dfs/journal' should be an absolute path
	at org.apache.hadoop.hdfs.qjournal.server.JournalNode.validateAndCreateJournalDir(JournalNode.java:196)
	at org.apache.hadoop.hdfs.qjournal.server.JournalNode.start(JournalNode.java:222)
	at org.apache.hadoop.hdfs.qjournal.server.JournalNode.run(JournalNode.java:209)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:81)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:95)
	at org.apache.hadoop.hdfs.qjournal.server.JournalNode.main(JournalNode.java:437)
2022-11-30 08:24:05,446 INFO util.ExitUtil: Exiting with status -1: java.lang.IllegalArgumentException: Journal dir 'file:/data/hadoop/dfs/journal' should be an absolute path
2022-11-30 08:24:05,461 INFO server.JournalNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down JournalNode at vm120/192.168.56.120
************************************************************/
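
dfs.journalnode.edits.dir takes a plain local path, not a file: URI. Strip the scheme in hdfs-site.xml on the JournalNode hosts and restart them:

# $HADOOP_HOME/etc/hadoop/hdfs-site.xml on vm119 vm120 vm121:
#   <property><name>dfs.journalnode.edits.dir</name><value>/data/hadoop/dfs/journal</value></property>
hdfs --daemon stop journalnode && hdfs --daemon start journalnode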

Error: Failed to load org.apache.kyuubi.engine.spark.SparkSQLEngine: scala/Serializable

2023-04-06 21:51:30.041 INFO org.apache.kyuubi.operation.LaunchEngine: Processing hduser's query[5bc7a677-c189-40d7-a01a-5e5b911ea437]: RUNNING_STATE -> ERROR_STATE, time taken: 3.546 seconds
Error: Failed to load org.apache.kyuubi.engine.spark.SparkSQLEngine: scala/Serializable
Exception in thread "main" java.lang.reflect.Undeclared23/04/06 21:51:28 INFO ShutdownHookManager: Shutdown hook called
23/04/06 21:51:28 INFO ShutdownHookManager: Deleting directory /tmp/spark-5abf7442-c612-4269-aba3-c1c188f688f1
a:163)
	at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
	at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
	at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1046)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1055)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: org.apache.spark.SparkUserAppException: User application exited with 101
	at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:938)
	at org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:165)
	at org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:163)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
	... 6 more
Error: org.apache.kyuubi.KyuubiSQLException: Failed to detect the root cause, please check /opt/kyuubi/work/hduser/kyuubi-spark-sql-engine.log.34 at server side if necessary. The last 10 line(s) of log are:
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1055)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: org.apache.spark.SparkUserAppException: User application exited with 101
	at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:938)
	at org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:165)
	at org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:163)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
	... 6 more
	at org.apache.kyuubi.KyuubiSQLException$.apply(KyuubiSQLException.scala:69)
	at org.apache.kyuubi.engine.ProcBuilder.getError(ProcBuilder.scala:271)
	at org.apache.kyuubi.engine.ProcBuilder.getError$(ProcBuilder.scala:264)
	at org.apache.kyuubi.engine.spark.SparkProcessBuilder.getError(SparkProcessBuilder.scala:37)
	at org.apache.kyuubi.engine.EngineRef.$anonfun$create$1(EngineRef.scala:206)
	at org.apache.kyuubi.ha.client.zookeeper.ZookeeperDiscoveryClient.tryWithLock(ZookeeperDiscoveryClient.scala:180)
	at org.apache.kyuubi.engine.EngineRef.tryWithLock(EngineRef.scala:166)
	at org.apache.kyuubi.engine.EngineRef.create(EngineRef.scala:171)
	at org.apache.kyuubi.engine.EngineRef.$anonfun$getOrCreate$1(EngineRef.scala:266)
	at scala.Option.getOrElse(Option.scala:189)
	at org.apache.kyuubi.engine.EngineRef.getOrCreate(EngineRef.scala:266)
	at org.apache.kyuubi.session.KyuubiSessionImpl.$anonfun$openEngineSession$2(KyuubiSessionImpl.scala:147)
	at org.apache.kyuubi.session.KyuubiSessionImpl.$anonfun$openEngineSession$2$adapted(KyuubiSessionImpl.scala:123)
	at org.apache.kyuubi.ha.client.DiscoveryClientProvider$.withDiscoveryClient(DiscoveryClientProvider.scala:36)
	at org.apache.kyuubi.session.KyuubiSessionImpl.$anonfun$openEngineSession$1(KyuubiSessionImpl.scala:123)
	at org.apache.kyuubi.session.KyuubiSession.handleSessionException(KyuubiSession.scala:49)
	at org.apache.kyuubi.session.KyuubiSessionImpl.openEngineSession(KyuubiSessionImpl.scala:123)
	at org.apache.kyuubi.operation.LaunchEngine.$anonfun$runInternal$2(LaunchEngine.scala:60)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750) (state=,code=0)
Beeline version 1.7.0 by Apache Kyuubi
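
"Failed to load ...SparkSQLEngine: scala/Serializable" is the classic symptom of mixing Scala 2.12 and 2.13 artifacts: the Kyuubi Spark SQL engine jar and the Spark distribution it launches should be built for the same Scala version. A quick way to compare (paths depend on where Spark and Kyuubi are installed):

$SPARK_HOME/bin/spark-submit --version                 # prints the Scala version of the Spark build
find $KYUUBI_HOME -name 'kyuubi-spark-sql-engine_*'    # _2.12 or _2.13 suffix in the jar name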

Caused by: java.util.concurrent.CompletionException: java.lang.RuntimeException: org.apache.flink.runtime.JobException: Cannot instantiate the coordinator for operator

2022-12-01 21:55:12
org.apache.flink.runtime.client.JobInitializationException: Could not start the JobMaster.
	at org.apache.flink.runtime.jobmaster.DefaultJobMasterServiceProcess.lambda$new$0(DefaultJobMasterServiceProcess.java:97)
	at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
	at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
	at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
	at java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1705)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.util.concurrent.CompletionException: java.lang.RuntimeException: org.apache.flink.runtime.JobException: Cannot instantiate the coordinator for operator Source: orders[1]
	at java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:314)
	at java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:319)
	at java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1702)
	... 3 more
Caused by: java.lang.RuntimeException: org.apache.flink.runtime.JobException: Cannot instantiate the coordinator for operator Source: orders[1]
	at org.apache.flink.util.ExceptionUtils.rethrow(ExceptionUtils.java:321)
	at org.apache.flink.util.function.FunctionUtils.lambda$uncheckedSupplier$4(FunctionUtils.java:114)
	at java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1700)
	... 3 more
Caused by: org.apache.flink.runtime.JobException: Cannot instantiate the coordinator for operator Source: orders[1]
	at org.apache.flink.runtime.executiongraph.ExecutionJobVertex.initialize(ExecutionJobVertex.java:229)
	at org.apache.flink.runtime.executiongraph.DefaultExecutionGraph.initializeJobVertex(DefaultExecutionGraph.java:901)
	at org.apache.flink.runtime.executiongraph.DefaultExecutionGraph.initializeJobVertices(DefaultExecutionGraph.java:891)
	at org.apache.flink.runtime.executiongraph.DefaultExecutionGraph.attachJobGraph(DefaultExecutionGraph.java:848)
	at org.apache.flink.runtime.executiongraph.DefaultExecutionGraph.attachJobGraph(DefaultExecutionGraph.java:830)
	at org.apache.flink.runtime.executiongraph.DefaultExecutionGraphBuilder.buildGraph(DefaultExecutionGraphBuilder.java:203)
	at org.apache.flink.runtime.scheduler.DefaultExecutionGraphFactory.createAndRestoreExecutionGraph(DefaultExecutionGraphFactory.java:156)
	at org.apache.flink.runtime.scheduler.SchedulerBase.createAndRestoreExecutionGraph(SchedulerBase.java:361)
	at org.apache.flink.runtime.scheduler.SchedulerBase.<init>(SchedulerBase.java:206)
	at org.apache.flink.runtime.scheduler.DefaultScheduler.<init>(DefaultScheduler.java:134)
	at org.apache.flink.runtime.scheduler.DefaultSchedulerFactory.createInstance(DefaultSchedulerFactory.java:152)
	at org.apache.flink.runtime.jobmaster.DefaultSlotPoolServiceSchedulerFactory.createScheduler(DefaultSlotPoolServiceSchedulerFactory.java:119)
	at org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:369)
	at org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:346)
	at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.internalCreateJobMasterService(DefaultJobMasterServiceFactory.java:123)
	at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.lambda$createJobMasterService$0(DefaultJobMasterServiceFactory.java:95)
	at org.apache.flink.util.function.FunctionUtils.lambda$uncheckedSupplier$4(FunctionUtils.java:112)
	... 4 more
Caused by: java.io.StreamCorruptedException: unexpected block data
	at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1704)
	at java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2496)
	at java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2346)
	at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2228)
	at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1687)
	at java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2496)
	at java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2346)
	at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2228)
	at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1687)
	at java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2496)
	at java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2346)
	at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2228)
	at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1687)
	at java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2496)
	at java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2346)
	at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2228)
	at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1687)
	at java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2496)
	at java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2346)
	at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2228)
	at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1687)
	at java.base/java.io.ObjectInputStream.readArray(ObjectInputStream.java:2109)
	at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1675)
	at java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2496)
	at java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2346)
	at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2228)
	at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1687)
	at java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2496)
	at java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2346)
	at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2228)
	at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1687)
	at java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2496)
	at java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2346)
	at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2228)
	at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1687)
	at java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2496)
	at java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2346)
	at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2228)
	at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1687)
	at java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2496)
	at java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2390)
	at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2228)
	at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1687)
	at java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:489)
	at java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:447)
	at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:617)
	at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:602)
	at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:589)
	at org.apache.flink.util.SerializedValue.deserializeValue(SerializedValue.java:67)
	at org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder.create(OperatorCoordinatorHolder.java:488)
	at org.apache.flink.runtime.executiongraph.ExecutionJobVertex.createOperatorCoordinatorHolder(ExecutionJobVertex.java:286)
	at org.apache.flink.runtime.executiongraph.ExecutionJobVertex.initialize(ExecutionJobVertex.java:223)
	... 20 more
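
The JobMaster fails while deserializing the source's OperatorCoordinator, which in a standalone cluster usually means the client and the cluster are not running the same jars: a flink-sql-connector-mysql-cdc (or Flink) version mismatch, or different lib/ contents across nodes. A first check, assuming /opt/flink on every node as in the table above:

# the connector jars under lib/ must be identical on all masters and workers
for h in vm116 vm117 vm118 vm119 vm120 vm121; do ssh $h 'hostname; ls /opt/flink/lib'; done
# after fixing the jars, restart the whole standalone cluster
/opt/flink/bin/stop-cluster.sh && /opt/flink/bin/start-cluster.sh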

File descriptor number is less than 60000. Please use (ulimit -n) to set a value equal or greater than 60000

E1128 09:54:02.321064  7108 storage_engine.cpp:426] File descriptor number is less than 60000. Please use (ulimit -n) to set a value equal or greater than 60000
W1128 09:54:02.321180  7108 storage_engine.cpp:188] check fd number failed, error: Internal error: file descriptors limit is too small
W1128 09:54:02.321193  7108 storage_engine.cpp:102] open engine failed, error: Internal error: file descriptors limit is too small
F1128 09:54:02.321691  7108 doris_main.cpp:395] fail to open StorageEngine, res=file descriptors limit is too small
*** Check failure stack trace: ***
    @     0x562fa255f30d  google::LogMessage::Fail()
    @     0x562fa2561849  google::LogMessage::SendToLog()
    @     0x562fa255ee76  google::LogMessage::Flush()
    @     0x562fa2561eb9  google::LogMessageFatal::~LogMessageFatal()
    @     0x562fa024cf8e  main
    @     0x7f1d6d4db083  __libc_start_main
    @     0x562fa048e86a  _start
    @              (nil)  (unknown)
*** Aborted at 1669600442 (unix time) try "date -d @1669600442" if you are using GNU date ***
*** SIGABRT unkown detail explain (@0x1bc4) received by PID 7108 (TID 0x7f1d6d490500) from PID 7108; stack trace: ***
 0# doris::signal::(anonymous namespace)::FailureSignalHandler(int, siginfo_t*, void*) at /mnt/disk2/ygl/code/github/apache-doris/be/src/common/signal_handler.h:420
 1# 0x00007F1D6D4FA090 in /lib/x86_64-linux-gnu/libc.so.6
 2# raise in /lib/x86_64-linux-gnu/libc.so.6
 3# abort in /lib/x86_64-linux-gnu/libc.so.6
 4# 0x0000562FA0137D38 in /opt/doris/be/lib/doris_be
 5# 0x0000562FA255F30D in /opt/doris/be/lib/doris_be
 6# google::LogMessage::SendToLog() in /opt/doris/be/lib/doris_be
 7# google::LogMessage::Flush() in /opt/doris/be/lib/doris_be
 8# google::LogMessageFatal::~LogMessageFatal() in /opt/doris/be/lib/doris_be
 9# main at /mnt/disk2/ygl/code/github/apache-doris/be/src/service/doris_main.cpp:390
10# __libc_start_main in /lib/x86_64-linux-gnu/libc.so.6
11# _start in /opt/doris/be/lib/doris_be

./start_be.sh: line 125:  7108 Aborted                 (core dumped) $LIMIT ${DORIS_HOME}/lib/doris_be "$@" 2>&1 < /dev/null
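
The BE refuses to start because the open-file limit is below 60000. Raise it for the user that runs start_be.sh (and make it permanent), then start the BE again:

ulimit -n 65536 && /opt/doris/be/bin/start_be.sh --daemon
# permanent setting, e.g. in /etc/security/limits.conf:
#   * soft nofile 65536
#   * hard nofile 65536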

WARNING: UNPROTECTED PRIVATE KEY FILE!

hadoop2: Warning: Permanently added 'hadoop2,172.21.0.22' (ECDSA) to the list of known hosts.
hadoop4: Warning: Permanently added 'hadoop4,172.21.0.24' (ECDSA) to the list of known hosts.
hadoop3: Warning: Permanently added 'hadoop3,172.21.0.23' (ECDSA) to the list of known hosts.
hadoop2: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
hadoop2: @         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
hadoop2: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
hadoop2: Permissions 0644 for '/home/hduser/.ssh/id_rsa' are too open.
hadoop2: It is required that your private key files are NOT accessible by others.
hadoop2: This private key will be ignored.
hadoop2: Load key "/home/hduser/.ssh/id_rsa": bad permissions
hadoop2: hduser@hadoop2: Permission denied (publickey,password).
hadoop4: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
hadoop4: @         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
hadoop4: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
hadoop4: Permissions 0644 for '/home/hduser/.ssh/id_rsa' are too open.
hadoop4: It is required that your private key files are NOT accessible by others.
hadoop4: This private key will be ignored.
hadoop4: Load key "/home/hduser/.ssh/id_rsa": bad permissions
hadoop4: hduser@hadoop4: Permission denied (publickey,password).
hadoop3: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
hadoop3: @         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
hadoop3: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
hadoop3: Permissions 0644 for '/home/hduser/.ssh/id_rsa' are too open.
hadoop3: It is required that your private key files are NOT accessible by others.
hadoop3: This private key will be ignored.
hadoop3: Load key "/home/hduser/.ssh/id_rsa": bad permissions
hadoop3: hduser@hadoop3: Permission denied (publickey,password).
hduser@hadoop1:/opt/hadoop/sbin$ 
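
The ssh client refuses to use a world-readable private key, so the login falls back and fails with "Permission denied". Tighten the permissions for hduser on every node:

chmod 700 /home/hduser/.ssh
chmod 600 /home/hduser/.ssh/id_rsa
chmod 600 /home/hduser/.ssh/authorized_keys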

doris fe current node is not added to the group. please add it first.

2022-11-27 21:27:32,249 INFO (leaderCheckpointer|84) [Checkpoint.doCheckpoint():95] last checkpoint journal id: 0, current finalized journal id: 0
2022-11-27 21:27:32,334 INFO (tablet stat mgr|23) [TabletStatMgr.runAfterCatalogReady():125] finished to update index row num of all databases. cost: 0 ms
2022-11-27 21:27:32,477 INFO (tablet checker|29) [TabletChecker.checkTablets():329] finished to check tablets. unhealth/total/added/in_sched/not_ready: 0/0/0/0/0, cost: 0 ms
2022-11-27 21:27:34,849 WARN (qtp138306399-121|121) [MetaService.isFromValidFe():65] request is not from valid FE. client: 192.168.56.112
2022-11-27 21:27:39,894 WARN (qtp138306399-119|119) [MetaService.isFromValidFe():65] request is not from valid FE. client: 192.168.56.112
2022-11-27 21:27:44,921 WARN (qtp138306399-138|138) [MetaService.isFromValidFe():65] request is not from valid FE. client: 192.168.56.112
2022-11-27 21:27:49,952 WARN (qtp138306399-121|121) [MetaService.isFromValidFe():65] request is not from valid FE. client: 192.168.56.112
2022-11-27 21:27:52,491 INFO (tablet checker|29) [TabletChecker.checkTablets():329] finished to check tablets. unhealth/total/added/in_sched/not_ready: 0/0/0/0/0, cost: 0 ms
2022-11-27 21:27:54,977 WARN (qtp138306399-119|119) [MetaService.isFromValidFe():65] request is not from valid FE. client: 192.168.56.112
2022-11-27 21:28:00,013 WARN (qtp138306399-138|138) [MetaService.isFromValidFe():65] request is not from valid FE. client: 192.168.56.112
2022-11-27 21:28:05,069 WARN (qtp138306399-121|121) [MetaService.isFromValidFe():65] request is not from valid FE. client: 192.168.56.112
2022-11-27 21:28:10,134 WARN (qtp138306399-119|119) [MetaService.isFromValidFe():65] request is not from valid FE. client: 192.168.56.112
2022-11-27 21:28:12,502 INFO (tablet checker|29) [TabletChecker.checkTablets():329] finished to check tablets. unhealth/total/added/in_sched/not_ready: 0/0/0/0/0, cost: 0 ms
2022-11-27 21:28:15,166 WARN (qtp138306399-138|138) [MetaService.isFromValidFe():65] request is not from valid FE. client: 192.168.56.112
2022-11-27 21:28:20,248 WARN (qtp138306399-121|121) [MetaService.isFromValidFe():65] request is not from valid FE. client: 192.168.56.112
2022-11-27 21:28:03,352 WARN (main|1) [Catalog.getClusterIdAndRole():986] current node is not added to the group. please add it first. sleep 5 seconds and retry, current helper nodes: [192.168.56.111:9010]
2022-11-27 21:28:08,415 WARN (main|1) [Catalog.getFeNodeTypeAndNameFromHelpers():1112] failed to get fe node type from helper node: 192.168.56.111:9010.
2022-11-27 21:28:08,416 WARN (main|1) [Catalog.getClusterIdAndRole():986] current node is not added to the group. please add it first. sleep 5 seconds and retry, current helper nodes: [192.168.56.111:9010]
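
A non-leader FE has to be registered on the leader before it can join the group. Register it via the MySQL protocol on the leader, then start the new FE with --helper pointing at the leader (vm112 is the observer in this setup; root with an empty password is the Doris default):

# on the leader (vm111)
mysql -h 192.168.56.111 -P 9030 -uroot -e 'ALTER SYSTEM ADD OBSERVER "192.168.56.112:9010";'
# first start on vm112
/opt/doris/fe/bin/start_fe.sh --helper 192.168.56.111:9010 --daemon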

org.apache.flink.runtime.taskexecutor.TaskExecutor [] - Could not resolve ResourceManager address akka.tcp://flink@vm116:20587/user/rpc/resourcemanager_0, retrying in 10000 ms: Could not connect to rpc endpoint under address akka.tcp://flink@vm116:20587/user/rpc/resourcemanager_0.

2022-12-01 18:34:34,845 INFO  org.apache.flink.runtime.taskexecutor.TaskExecutor           [] - Could not resolve ResourceManager address akka.tcp://flink@vm116:20587/user/rpc/resourcemanager_0, retrying in 10000 ms: Could not connect to rpc endpoint under address akka.tcp://flink@vm116:20587/user/rpc/resourcemanager_0.
2022-12-01 18:34:44,904 WARN  akka.remote.transport.netty.NettyTransport                   [] - Remote connection to [null] failed with java.net.ConnectException: Connection refused: vm116/192.168.56.116:20587
2022-12-01 18:34:44,911 WARN  akka.remote.ReliableDeliverySupervisor                       [] - Association with remote system [akka.tcp://flink@vm116:20587] has failed, address is now gated for [50] ms. Reason: [Association failed with [akka.tcp://flink@vm116:20587]] Caused by: [java.net.ConnectException: Connection refused: vm116/192.168.56.116:20587]
2022-12-01 18:34:44,944 INFO  org.apache.flink.runtime.taskexecutor.TaskExecutor           [] - Could not resolve ResourceManager address akka.tcp://flink@vm116:20587/user/rpc/resourcemanager_0, retrying in 10000 ms: Could not connect to rpc endpoint under address akka.tcp://flink@vm116:20587/user/rpc/resourcemanager_0.
2022-12-01 18:34:54,999 WARN  akka.remote.transport.netty.NettyTransport                   [] - Remote connection to [null] failed with java.net.ConnectException: Connection refused: vm116/192.168.56.116:20587
2022-12-01 18:34:55,007 WARN  akka.remote.ReliableDeliverySupervisor                       [] - Association with remote system [akka.tcp://flink@vm116:20587] has failed, address is now gated for [50] ms. Reason: [Association failed with [akka.tcp://flink@vm116:20587]] Caused by: [java.net.ConnectException: Connection refused: vm116/192.168.56.116:20587]
2022-12-01 18:34:55,016 INFO  org.apache.flink.runtime.taskexecutor.TaskExecutor           [] - Could not resolve ResourceManager address akka.tcp://flink@vm116:20587/user/rpc/resourcemanager_0, retrying in 10000 ms: Could not connect to rpc endpoint under address akka.tcp://flink@vm116:20587/user/rpc/resourcemanager_0.
2022-12-01 18:35:05,049 WARN  akka.remote.transport.netty.NettyTransport                   [] - Remote connection to [null] failed with java.net.ConnectException: Connection refused: vm116/192.168.56.116:20587
2022-12-01 18:35:05,054 WARN  akka.remote.ReliableDeliverySupervisor                       [] - Association with remote system [akka.tcp://flink@vm116:20587] has failed, address is now gated for [50] ms. Reason: [Association failed with [akka.tcp://flink@vm116:20587]] Caused by: [java.net.ConnectException: Connection refused: vm116/192.168.56.116:20587]
2022-12-01 18:35:05,058 INFO  org.apache.flink.runtime.taskexecutor.TaskExecutor           [] - Could not resolve ResourceManager address akka.tcp://flink@vm116:20587/user/rpc/resourcemanager_0, retrying in 10000 ms: Could not connect to rpc endpoint under address akka.tcp://flink@vm116:20587/user/rpc/resourcemanager_0.
2022-12-01 18:35:15,101 WARN  akka.remote.transport.netty.NettyTransport                   [] - Remote connection to [null] failed with java.net.ConnectException: Connection refused: vm116/192.168.56.116:20587
2022-12-01 18:35:15,102 WARN  akka.remote.ReliableDeliverySupervisor                       [] - Association with remote system [akka.tcp://flink@vm116:20587] has failed, address is now gated for [50] ms. Reason: [Association failed with [akka.tcp://flink@vm116:20587]] Caused by: [java.net.ConnectException: Connection refused: vm116/192.168.56.116:20587]
2022-12-01 18:35:15,112 INFO  org.apache.flink.runtime.taskexecutor.TaskExecutor           [] - Could not resolve ResourceManager address akka.tcp://flink@vm116:20587/user/rpc/resourcemanager_0, retrying in 10000 ms: Could not connect to rpc endpoint under address akka.tcp://flink@vm116:20587/user/rpc/resourcemanager_0.
2022-12-01 18:35:25,155 WARN  akka.remote.transport.netty.NettyTransport                   [] - Remote connection to [null] failed with java.net.ConnectException: Connection refused: vm116/192.168.56.116:20587
2022-12-01 18:35:25,166 WARN  akka.remote.ReliableDeliverySupervisor                       [] - Association with remote system [akka.tcp://flink@vm116:20587] has failed, address is now gated for [50] ms. Reason: [Association failed with [akka.tcp://flink@vm116:20587]] Caused by: [java.net.ConnectException: Connection refused: vm116/192.168.56.116:20587]
2022-12-01 18:35:25,172 INFO  org.apache.flink.runtime.taskexecutor.TaskExecutor           [] - Could not resolve ResourceManager address akka.tcp://flink@vm116:20587/user/rpc/resourcemanager_0, retrying in 10000 ms: Could not connect to rpc endpoint under address akka.tcp://flink@vm116:20587/user/rpc/resourcemanager_0.
2022-12-01 18:35:35,233 WARN  akka.remote.transport.netty.NettyTransport                   [] - Remote connection to [null] failed with java.net.ConnectException: Connection refused: vm116/192.168.56.116:20587
2022-12-01 18:35:35,237 WARN  akka.remote.ReliableDeliverySupervisor                       [] - Association with remote system [akka.tcp://flink@vm116:20587] has failed, address is now gated for [50] ms. Reason: [Association failed with [akka.tcp://flink@vm116:20587]] Caused by: [java.net.ConnectException: Connection refused: vm116/192.168.56.116:20587]
2022-12-01 18:35:35,258 INFO  org.apache.flink.runtime.taskexecutor.TaskExecutor           [] - Could not resolve ResourceManager address akka.tcp://flink@vm116:20587/user/rpc/resourcemanager_0, retrying in 10000 ms: Could not connect to rpc endpoint under address akka.tcp://flink@vm116:20587/user/rpc/resourcemanager_0.
2022-12-01 18:35:41,913 INFO  org.apache.flink.runtime.taskexecutor.TaskManagerRunner      [] - RECEIVED SIGNAL 15: SIGTERM. Shutting down as requested.
2022-12-01 18:35:41,916 INFO  org.apache.flink.runtime.blob.PermanentBlobCache             [] - Shutting down BLOB cache
2022-12-01 18:35:41,917 INFO  org.apache.flink.runtime.state.TaskExecutorStateChangelogStoragesManager [] - Shutting down TaskExecutorStateChangelogStoragesManager.
2022-12-01 18:35:41,917 INFO  org.apache.flink.runtime.blob.TransientBlobCache             [] - Shutting down BLOB cache
2022-12-01 18:35:41,925 INFO  org.apache.flink.runtime.io.disk.FileChannelManagerImpl      [] - FileChannelManager removed spill file directory /tmp/flink-netty-shuffle-bb3718ae-c8d9-4ea8-9a45-4291121d3390
2022-12-01 18:35:41,928 INFO  org.apache.flink.runtime.filecache.FileCache                 [] - removed file cache directory /tmp/flink-dist-cache-1b30d5af-a75b-4674-b228-dcaa90ffb588
2022-12-01 18:35:41,937 INFO  org.apache.flink.runtime.state.TaskExecutorLocalStateStoresManager [] - Shutting down TaskExecutorLocalStateStoresManager.
2022-12-01 18:35:41,977 INFO  org.apache.flink.runtime.io.disk.FileChannelManagerImpl      [] - FileChannelManager removed spill file directory /tmp/flink-io-e310b450-98fc-4966-a847-8006ee938f1e
2022-12-01 18:35:41,989 INFO  org.apache.flink.runtime.taskexecutor.TaskExecutor           [] - Stopping TaskExecutor akka.tcp://flink@localhost:7955/user/rpc/taskmanager_0.
2022-12-01 18:35:41,990 INFO  org.apache.flink.runtime.taskexecutor.TaskExecutor           [] - Terminating registration attempts towards ResourceManager akka.tcp://flink@vm116:20587/user/rpc/resourcemanager_0.
2022-12-01 18:35:42,008 INFO  org.apache.flink.runtime.taskexecutor.DefaultJobLeaderService [] - Stop job leader service.
2022-12-01 18:35:42,008 INFO  org.apache.flink.runtime.leaderretrieval.DefaultLeaderRetrievalService [] - Stopping DefaultLeaderRetrievalService.
2022-12-01 18:35:42,009 INFO  org.apache.flink.runtime.leaderretrieval.ZooKeeperLeaderRetrievalDriver [] - Closing ZookeeperLeaderRetrievalDriver{connectionInformationPath='/resource_manager/connection_info'}.
2022-12-01 18:35:42,042 INFO  org.apache.flink.runtime.io.network.NettyShuffleEnvironment  [] - Shutting down the network environment and its components.
2022-12-01 18:35:42,289 INFO  akka.remote.RemoteActorRefProvider$RemotingTerminator        [] - Shutting down remote daemon.
2022-12-01 18:35:42,291 INFO  akka.remote.RemoteActorRefProvider$RemotingTerminator        [] - Remote daemon shut down; proceeding with flushing remote transports.
2022-12-01 18:35:42,328 INFO  akka.remote.RemoteActorRefProvider$RemotingTerminator        [] - Remoting shut down.
2022-12-01 18:35:42,353 WARN  akka.actor.CoordinatedShutdown                               [] - Could not addJvmShutdownHook, due to: Shutdown in progress
2022-12-01 18:35:42,360 INFO  akka.remote.RemoteActorRefProvider$RemotingTerminator        [] - Shutting down remote daemon.
2022-12-01 18:35:42,361 INFO  akka.remote.RemoteActorRefProvider$RemotingTerminator        [] - Remote daemon shut down; proceeding with flushing remote transports.
2022-12-01 18:35:42,381 INFO  akka.remote.RemoteActorRefProvider$RemotingTerminator        [] - Remoting shut down.
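
"Connection refused" to vm116:20587 means the TaskManager keeps retrying against a JobManager that is not (or no longer) listening there, typically because the JobManager died or came back on a different port. Check the master and restart the standalone cluster if needed:

ssh vm116 'jps | grep -i StandaloneSession'    # the JobManager process of a standalone session cluster
/opt/flink/bin/stop-cluster.sh && /opt/flink/bin/start-cluster.sh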

MetaException(message:Version information not found in metastore.)

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hive/lib/log4j-slf4j-impl-2.17.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop/share/hadoop/common/lib/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
2023-04-04 17:57:24: Starting Hive Metastore Server
bash: warning: setlocale: LC_ALL: cannot change locale (en_US.utf-8)
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hive/lib/log4j-slf4j-impl-2.17.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop/share/hadoop/common/lib/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
MetaException(message:Version information not found in metastore.)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:84)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:93)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:8672)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:8667)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:8937)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:8854)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
Caused by: MetaException(message:Version information not found in metastore.)
	at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:9085)
	at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:9063)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97)
	at com.sun.proxy.$Proxy25.verifySchema(Unknown Source)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMSForConf(HiveMetaStore.java:699)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:692)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:769)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:540)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:80)
	... 11 more
Exception in thread "main" MetaException(message:Version information not found in metastore.)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:84)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:93)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:8672)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:8667)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:8937)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:8854)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
Caused by: MetaException(message:Version information not found in metastore.)
	at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:9085)
	at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:9063)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97)
	at com.sun.proxy.$Proxy25.verifySchema(Unknown Source)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMSForConf(HiveMetaStore.java:699)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:692)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:769)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:540)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:80)
	... 11 more
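
The metastore backend database has no schema yet (its VERSION table is missing), so the Hive Metastore refuses to start. Initialize the schema once with schematool; -dbType must match whatever hive-site.xml points at (mysql here is an assumption):

$HIVE_HOME/bin/schematool -dbType mysql -initSchema
# (setting hive.metastore.schema.verification=false only hides the check; initializing the schema is the real fix)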

Vagrant is currently configured to create VirtualBox synced folders with the `SharedFoldersEnableSymlinksCreate` option enabled.

Vagrant is currently configured to create VirtualBox synced folders with
the `SharedFoldersEnableSymlinksCreate` option enabled. If the Vagrant
guest is not trusted, you may want to disable this option. For more
information on this option, please refer to the VirtualBox manual:

  https://www.virtualbox.org/manual/ch04.html#sharedfolders

This option can be disabled globally with an environment variable:

  VAGRANT_DISABLE_VBOXSYMLINKCREATE=1

or on a per folder basis within the Vagrantfile:

  config.vm.synced_folder '/host/path', '/guest/path', SharedFoldersEnableSymlinksCreate: false
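
If the guests here are trusted, this warning can simply be ignored; otherwise, a minimal sketch of the global switch (assuming the VMs are brought up from the project root as in the scripts above):

# disable symlink creation in synced folders before creating the VMs
export VAGRANT_DISABLE_VBOXSYMLINKCREATE=1
vagrant up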

java.io.IOException: Could not perform checkpoint 2 for operator SinkMaterializer[36] -> Sink: enriched_orders[36] (2/2)#0

2022-12-01 23:00:04,595 INFO  org.apache.flink.runtime.source.coordinator.SourceCoordinator [] - Marking checkpoint 1 as completed for source Source: orders[25].
2022-12-01 23:00:04,595 INFO  org.apache.flink.runtime.source.coordinator.SourceCoordinator [] - Marking checkpoint 1 as completed for source Source: products[27].
2022-12-01 23:00:06,519 INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator    [] - Triggering checkpoint 2 (type=CheckpointType{name='Checkpoint', sharingFilesStrategy=FORWARD_BACKWARD}) @ 1669906806494 for job 669fd3809635a13a875e92c90d5ada67.
2022-12-01 23:00:07,329 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - SinkMaterializer[36] -> Sink: enriched_orders[36] (2/2) (d895619b5a7986dffbe3bc1df0522100_5b5d46caf7fabdb9e789f1f4dac466a5_1_0) switched from RUNNING to FAILED on 192.168.56.116:9267-93cc4c @ vm116 (dataPort=15041).
java.io.IOException: Could not perform checkpoint 2 for operator SinkMaterializer[36] -> Sink: enriched_orders[36] (2/2)#0.
	at org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpointOnBarrier(StreamTask.java:1238) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.streaming.runtime.io.checkpointing.CheckpointBarrierHandler.notifyCheckpoint(CheckpointBarrierHandler.java:147) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.streaming.runtime.io.checkpointing.SingleCheckpointBarrierHandler.triggerCheckpoint(SingleCheckpointBarrierHandler.java:287) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.streaming.runtime.io.checkpointing.SingleCheckpointBarrierHandler.access$100(SingleCheckpointBarrierHandler.java:64) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.streaming.runtime.io.checkpointing.SingleCheckpointBarrierHandler$ControllerImpl.triggerGlobalCheckpoint(SingleCheckpointBarrierHandler.java:488) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.streaming.runtime.io.checkpointing.AbstractAlignedBarrierHandlerState.triggerGlobalCheckpoint(AbstractAlignedBarrierHandlerState.java:74) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.streaming.runtime.io.checkpointing.AbstractAlignedBarrierHandlerState.barrierReceived(AbstractAlignedBarrierHandlerState.java:66) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.streaming.runtime.io.checkpointing.SingleCheckpointBarrierHandler.lambda$processBarrier$2(SingleCheckpointBarrierHandler.java:234) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.streaming.runtime.io.checkpointing.SingleCheckpointBarrierHandler.markCheckpointAlignedAndTransformState(SingleCheckpointBarrierHandler.java:262) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.streaming.runtime.io.checkpointing.SingleCheckpointBarrierHandler.processBarrier(SingleCheckpointBarrierHandler.java:231) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.streaming.runtime.io.checkpointing.CheckpointedInputGate.handleEvent(CheckpointedInputGate.java:181) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.streaming.runtime.io.checkpointing.CheckpointedInputGate.pollNext(CheckpointedInputGate.java:159) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.emitNext(AbstractStreamTaskNetworkInput.java:110) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:542) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:231) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:831) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:780) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:935) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:914) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:728) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.runtime.taskmanager.Task.run(Task.java:550) ~[flink-dist-1.16.0.jar:1.16.0]
	at java.lang.Thread.run(Thread.java:829) ~[?:?]
	Suppressed: java.lang.RuntimeException: An error occurred in ElasticsearchSink.
		at org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkBase.checkErrorAndRethrow(ElasticsearchSinkBase.java:426) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
		at org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkBase.close(ElasticsearchSinkBase.java:365) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
		at org.apache.flink.api.common.functions.util.FunctionUtils.closeFunction(FunctionUtils.java:41) ~[flink-dist-1.16.0.jar:1.16.0]
		at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.close(AbstractUdfStreamOperator.java:114) ~[flink-dist-1.16.0.jar:1.16.0]
		at org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.close(StreamOperatorWrapper.java:163) ~[flink-dist-1.16.0.jar:1.16.0]
		at org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.closeAllOperators(RegularOperatorChain.java:125) ~[flink-dist-1.16.0.jar:1.16.0]
		at org.apache.flink.streaming.runtime.tasks.StreamTask.closeAllOperators(StreamTask.java:1025) ~[flink-dist-1.16.0.jar:1.16.0]
		at org.apache.flink.util.IOUtils.closeAll(IOUtils.java:255) ~[flink-dist-1.16.0.jar:1.16.0]
		at org.apache.flink.core.fs.AutoCloseableRegistry.doClose(AutoCloseableRegistry.java:72) ~[flink-dist-1.16.0.jar:1.16.0]
		at org.apache.flink.util.AbstractAutoCloseableRegistry.close(AbstractAutoCloseableRegistry.java:127) ~[flink-dist-1.16.0.jar:1.16.0]
		at org.apache.flink.streaming.runtime.tasks.StreamTask.cleanUp(StreamTask.java:943) ~[flink-dist-1.16.0.jar:1.16.0]
		at org.apache.flink.runtime.taskmanager.Task.lambda$restoreAndInvoke$0(Task.java:917) ~[flink-dist-1.16.0.jar:1.16.0]
		at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:935) ~[flink-dist-1.16.0.jar:1.16.0]
		at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:917) ~[flink-dist-1.16.0.jar:1.16.0]
		at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:728) ~[flink-dist-1.16.0.jar:1.16.0]
		at org.apache.flink.runtime.taskmanager.Task.run(Task.java:550) ~[flink-dist-1.16.0.jar:1.16.0]
		at java.lang.Thread.run(Thread.java:829) ~[?:?]
	Caused by: org.apache.flink.elasticsearch7.shaded.org.elasticsearch.ElasticsearchException: Elasticsearch exception [type=illegal_argument_exception, reason=mapper [price] cannot be changed from type [long] to [float]]
		at org.apache.flink.elasticsearch7.shaded.org.elasticsearch.ElasticsearchException.innerFromXContent(ElasticsearchException.java:496) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
		at org.apache.flink.elasticsearch7.shaded.org.elasticsearch.ElasticsearchException.fromXContent(ElasticsearchException.java:407) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
		at org.apache.flink.elasticsearch7.shaded.org.elasticsearch.action.bulk.BulkItemResponse.fromXContent(BulkItemResponse.java:139) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
		at org.apache.flink.elasticsearch7.shaded.org.elasticsearch.action.bulk.BulkResponse.fromXContent(BulkResponse.java:188) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
		at org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestHighLevelClient.parseEntity(RestHighLevelClient.java:1911) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
		at org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestHighLevelClient.lambda$performRequestAsyncAndParseEntity$10(RestHighLevelClient.java:1699) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
		at org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestHighLevelClient$1.onSuccess(RestHighLevelClient.java:1781) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
		at org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestClient$FailureTrackingResponseListener.onSuccess(RestClient.java:636) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
		at org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestClient$1.completed(RestClient.java:376) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
		at org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestClient$1.completed(RestClient.java:370) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
		at org.apache.flink.elasticsearch7.shaded.org.apache.http.concurrent.BasicFuture.completed(BasicFuture.java:122) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
		at org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.responseCompleted(DefaultClientExchangeHandlerImpl.java:181) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
		at org.apache.flink.elasticsearch7.shaded.org.apache.http.nio.protocol.HttpAsyncRequestExecutor.processResponse(HttpAsyncRequestExecutor.java:448) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
		at org.apache.flink.elasticsearch7.shaded.org.apache.http.nio.protocol.HttpAsyncRequestExecutor.inputReady(HttpAsyncRequestExecutor.java:338) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
		at org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:265) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
		at org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:81) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
		at org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:39) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
		at org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:114) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
		at org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:162) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
		at org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:337) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
		at org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:315) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
		at org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:276) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
		at org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
		at org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:591) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
		... 1 more
Caused by: org.apache.flink.runtime.checkpoint.CheckpointException: Could not complete snapshot 2 for operator SinkMaterializer[36] -> Sink: enriched_orders[36] (2/2)#0. Failure reason: Checkpoint was declined.
	at org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.snapshotState(StreamOperatorStateHandler.java:269) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.snapshotState(StreamOperatorStateHandler.java:173) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.streaming.api.operators.AbstractStreamOperator.snapshotState(AbstractStreamOperator.java:345) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.checkpointStreamOperator(RegularOperatorChain.java:228) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.buildOperatorSnapshotFutures(RegularOperatorChain.java:213) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.snapshotState(RegularOperatorChain.java:192) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.streaming.runtime.tasks.SubtaskCheckpointCoordinatorImpl.takeSnapshotSync(SubtaskCheckpointCoordinatorImpl.java:726) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.streaming.runtime.tasks.SubtaskCheckpointCoordinatorImpl.checkpointState(SubtaskCheckpointCoordinatorImpl.java:363) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$performCheckpoint$13(StreamTask.java:1281) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.runThrowing(StreamTaskActionExecutor.java:50) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.streaming.runtime.tasks.StreamTask.performCheckpoint(StreamTask.java:1269) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpointOnBarrier(StreamTask.java:1226) ~[flink-dist-1.16.0.jar:1.16.0]
	... 22 more
Caused by: java.lang.RuntimeException: An error occurred in ElasticsearchSink.
	at org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkBase.checkErrorAndRethrow(ElasticsearchSinkBase.java:426) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
	at org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkBase.checkAsyncErrorsAndRequests(ElasticsearchSinkBase.java:431) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
	at org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkBase.snapshotState(ElasticsearchSinkBase.java:344) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
	at org.apache.flink.streaming.util.functions.StreamingFunctionUtils.trySnapshotFunctionState(StreamingFunctionUtils.java:118) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.streaming.util.functions.StreamingFunctionUtils.snapshotFunctionState(StreamingFunctionUtils.java:99) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.snapshotState(AbstractUdfStreamOperator.java:87) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.snapshotState(StreamOperatorStateHandler.java:222) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.snapshotState(StreamOperatorStateHandler.java:173) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.streaming.api.operators.AbstractStreamOperator.snapshotState(AbstractStreamOperator.java:345) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.checkpointStreamOperator(RegularOperatorChain.java:228) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.buildOperatorSnapshotFutures(RegularOperatorChain.java:213) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.snapshotState(RegularOperatorChain.java:192) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.streaming.runtime.tasks.SubtaskCheckpointCoordinatorImpl.takeSnapshotSync(SubtaskCheckpointCoordinatorImpl.java:726) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.streaming.runtime.tasks.SubtaskCheckpointCoordinatorImpl.checkpointState(SubtaskCheckpointCoordinatorImpl.java:363) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$performCheckpoint$13(StreamTask.java:1281) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.runThrowing(StreamTaskActionExecutor.java:50) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.streaming.runtime.tasks.StreamTask.performCheckpoint(StreamTask.java:1269) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpointOnBarrier(StreamTask.java:1226) ~[flink-dist-1.16.0.jar:1.16.0]
	... 22 more
Caused by: org.apache.flink.elasticsearch7.shaded.org.elasticsearch.ElasticsearchException: Elasticsearch exception [type=illegal_argument_exception, reason=mapper [price] cannot be changed from type [long] to [float]]
	at org.apache.flink.elasticsearch7.shaded.org.elasticsearch.ElasticsearchException.innerFromXContent(ElasticsearchException.java:496) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
	at org.apache.flink.elasticsearch7.shaded.org.elasticsearch.ElasticsearchException.fromXContent(ElasticsearchException.java:407) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
	at org.apache.flink.elasticsearch7.shaded.org.elasticsearch.action.bulk.BulkItemResponse.fromXContent(BulkItemResponse.java:139) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
	at org.apache.flink.elasticsearch7.shaded.org.elasticsearch.action.bulk.BulkResponse.fromXContent(BulkResponse.java:188) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
	at org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestHighLevelClient.parseEntity(RestHighLevelClient.java:1911) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
	at org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestHighLevelClient.lambda$performRequestAsyncAndParseEntity$10(RestHighLevelClient.java:1699) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
	at org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestHighLevelClient$1.onSuccess(RestHighLevelClient.java:1781) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
	at org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestClient$FailureTrackingResponseListener.onSuccess(RestClient.java:636) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
	at org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestClient$1.completed(RestClient.java:376) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
	at org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestClient$1.completed(RestClient.java:370) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
	at org.apache.flink.elasticsearch7.shaded.org.apache.http.concurrent.BasicFuture.completed(BasicFuture.java:122) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
	at org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.responseCompleted(DefaultClientExchangeHandlerImpl.java:181) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
	at org.apache.flink.elasticsearch7.shaded.org.apache.http.nio.protocol.HttpAsyncRequestExecutor.processResponse(HttpAsyncRequestExecutor.java:448) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
	at org.apache.flink.elasticsearch7.shaded.org.apache.http.nio.protocol.HttpAsyncRequestExecutor.inputReady(HttpAsyncRequestExecutor.java:338) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
	at org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:265) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
	at org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:81) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
	at org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:39) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
	at org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:114) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
	at org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:162) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
	at org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:337) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
	at org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:315) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
	at org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:276) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
	at org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
	at org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:591) ~[flink-sql-connector-elasticsearch7-1.16.0.jar:1.16.0]
	... 1 more
2022-12-01 23:00:07,380 INFO  org.apache.flink.runtime.jobmaster.JobMaster                 [] - 11 tasks will be restarted to recover the failed task d895619b5a7986dffbe3bc1df0522100_5b5d46caf7fabdb9e789f1f4dac466a5_1_0.
2022-12-01 23:00:07,380 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Job insert-into_default_catalog.default_database.enriched_orders (669fd3809635a13a875e92c90d5ada67) switched from state RUNNING to RESTARTING.
2022-12-01 23:00:07,381 WARN  org.apache.flink.runtime.checkpoint.CheckpointFailureManager [] - Failed to trigger or complete checkpoint 2 for job 669fd3809635a13a875e92c90d5ada67. (0 consecutive failed attempts so far)
org.apache.flink.runtime.checkpoint.CheckpointException: Checkpoint Coordinator is suspending.
	at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.stopCheckpointScheduler(CheckpointCoordinator.java:1926) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.runtime.checkpoint.CheckpointCoordinatorDeActivator.jobStatusChanges(CheckpointCoordinatorDeActivator.java:46) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.runtime.executiongraph.DefaultExecutionGraph.notifyJobStatusChange(DefaultExecutionGraph.java:1566) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.runtime.executiongraph.DefaultExecutionGraph.transitionState(DefaultExecutionGraph.java:1161) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.runtime.executiongraph.DefaultExecutionGraph.transitionState(DefaultExecutionGraph.java:1133) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.runtime.scheduler.SchedulerBase.transitionExecutionGraphState(SchedulerBase.java:571) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.runtime.scheduler.DefaultScheduler.addVerticesToRestartPending(DefaultScheduler.java:362) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.runtime.scheduler.DefaultScheduler.restartTasksWithDelay(DefaultScheduler.java:334) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.runtime.scheduler.DefaultScheduler.maybeRestartTasks(DefaultScheduler.java:305) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:247) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.runtime.scheduler.DefaultScheduler.onTaskFailed(DefaultScheduler.java:240) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.runtime.scheduler.SchedulerBase.onTaskExecutionStateUpdate(SchedulerBase.java:738) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:715) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.runtime.scheduler.SchedulerNG.updateTaskExecutionState(SchedulerNG.java:78) ~[flink-dist-1.16.0.jar:1.16.0]
	at org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:477) ~[flink-dist-1.16.0.jar:1.16.0]
	at jdk.internal.reflect.GeneratedMethodAccessor65.invoke(Unknown Source) ~[?:?]
	at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
	at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
	at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$handleRpcInvocation$1(AkkaRpcActor.java:309) ~[flink-rpc-akka_0aad32e2-2924-43db-8369-4b5052950f45.jar:1.16.0]
	at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:83) ~[flink-rpc-akka_0aad32e2-2924-43db-8369-4b5052950f45.jar:1.16.0]
	at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:307) ~[flink-rpc-akka_0aad32e2-2924-43db-8369-4b5052950f45.jar:1.16.0]
	at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:222) ~[flink-rpc-akka_0aad32e2-2924-43db-8369-4b5052950f45.jar:1.16.0]
	at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:84) ~[flink-rpc-akka_0aad32e2-2924-43db-8369-4b5052950f45.jar:1.16.0]
	at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:168) ~[flink-rpc-akka_0aad32e2-2924-43db-8369-4b5052950f45.jar:1.16.0]
	at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24) [flink-rpc-akka_0aad32e2-2924-43db-8369-4b5052950f45.jar:1.16.0]
	at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20) [flink-rpc-akka_0aad32e2-2924-43db-8369-4b5052950f45.jar:1.16.0]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:123) [flink-rpc-akka_0aad32e2-2924-43db-8369-4b5052950f45.jar:1.16.0]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122) [flink-rpc-akka_0aad32e2-2924-43db-8369-4b5052950f45.jar:1.16.0]
	at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20) [flink-rpc-akka_0aad32e2-2924-43db-8369-4b5052950f45.jar:1.16.0]
	at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) [flink-rpc-akka_0aad32e2-2924-43db-8369-4b5052950f45.jar:1.16.0]
	at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172) [flink-rpc-akka_0aad32e2-2924-43db-8369-4b5052950f45.jar:1.16.0]
	at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172) [flink-rpc-akka_0aad32e2-2924-43db-8369-4b5052950f45.jar:1.16.0]
	at akka.actor.Actor.aroundReceive(Actor.scala:537) [flink-rpc-akka_0aad32e2-2924-43db-8369-4b5052950f45.jar:1.16.0]
	at akka.actor.Actor.aroundReceive$(Actor.scala:535) [flink-rpc-akka_0aad32e2-2924-43db-8369-4b5052950f45.jar:1.16.0]
	at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:220) [flink-rpc-akka_0aad32e2-2924-43db-8369-4b5052950f45.jar:1.16.0]
	at akka.actor.ActorCell.receiveMessage(ActorCell.scala:580) [flink-rpc-akka_0aad32e2-2924-43db-8369-4b5052950f45.jar:1.16.0]
	at akka.actor.ActorCell.invoke(ActorCell.scala:548) [flink-rpc-akka_0aad32e2-2924-43db-8369-4b5052950f45.jar:1.16.0]
	at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270) [flink-rpc-akka_0aad32e2-2924-43db-8369-4b5052950f45.jar:1.16.0]
	at akka.dispatch.Mailbox.run(Mailbox.scala:231) [flink-rpc-akka_0aad32e2-2924-43db-8369-4b5052950f45.jar:1.16.0]
	at akka.dispatch.Mailbox.exec(Mailbox.scala:243) [flink-rpc-akka_0aad32e2-2924-43db-8369-4b5052950f45.jar:1.16.0]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183) [?:?]
2022-12-01 23:00:07,395 INFO  org.apache.flink.runtime.source.coordinator.SourceCoordinator [] - Removing registered reader after failure for subtask 0 (#0) of source Source: orders[25].
2022-12-01 23:00:07,394 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Source: orders[25] (1/2) (d895619b5a7986dffbe3bc1df0522100_bc764cd8ddf7a0cff126f51c16239658_0_0) switched from RUNNING to CANCELING.
2022-12-01 23:00:07,402 INFO  org.apache.flink.runtime.source.coordinator.SourceCoordinator [] - Removing registered reader after failure for subtask 1 (#0) of source Source: orders[25].
2022-12-01 23:00:07,402 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Source: orders[25] (2/2) (d895619b5a7986dffbe3bc1df0522100_bc764cd8ddf7a0cff126f51c16239658_1_0) switched from RUNNING to CANCELING.
2022-12-01 23:00:07,403 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Source: products[27] (1/2) (d895619b5a7986dffbe3bc1df0522100_feca28aff5a3958840bee985ee7de4d3_0_0) switched from RUNNING to CANCELING.
2022-12-01 23:00:07,403 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Join[29] -> Calc[30] (2/2) (d895619b5a7986dffbe3bc1df0522100_f3c52ad168ea5842a0be53deff739f6c_1_0) switched from RUNNING to CANCELING.
2022-12-01 23:00:07,403 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Source: shipments[32] (1/1) (d895619b5a7986dffbe3bc1df0522100_605b35e407e90cda15ad084365733fdd_0_0) switched from RUNNING to CANCELING.
2022-12-01 23:00:07,403 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Join[34] -> Calc[35] -> ConstraintEnforcer[36] (1/2) (d895619b5a7986dffbe3bc1df0522100_4bc97eb6511780b518f989f149963722_0_0) switched from RUNNING to CANCELING.
2022-12-01 23:00:07,404 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Join[34] -> Calc[35] -> ConstraintEnforcer[36] (2/2) (d895619b5a7986dffbe3bc1df0522100_4bc97eb6511780b518f989f149963722_1_0) switched from RUNNING to CANCELING.
2022-12-01 23:00:07,404 INFO  org.apache.flink.runtime.source.coordinator.SourceCoordinator [] - Removing registered reader after failure for subtask 0 (#0) of source Source: products[27].
2022-12-01 23:00:07,404 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Join[29] -> Calc[30] (1/2) (d895619b5a7986dffbe3bc1df0522100_f3c52ad168ea5842a0be53deff739f6c_0_0) switched from RUNNING to CANCELING.
2022-12-01 23:00:07,404 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Source: products[27] (2/2) (d895619b5a7986dffbe3bc1df0522100_feca28aff5a3958840bee985ee7de4d3_1_0) switched from RUNNING to CANCELING.
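
The suppressed cause shows the real problem: the enriched_orders index in Elasticsearch already maps price as long, so the sink's float values are rejected and every checkpoint fails. A hedged recovery sketch, assuming Elasticsearch listens on localhost:9200 and the index can be rebuilt from the CDC sources once the job is restarted:

# drop the conflicting index (its contents are re-derived from the source databases)
curl -X DELETE "http://localhost:9200/enriched_orders"
# recreate it with price explicitly mapped as float before resubmitting the Flink job
curl -X PUT "http://localhost:9200/enriched_orders" \
  -H 'Content-Type: application/json' \
  -d '{"mappings":{"properties":{"price":{"type":"float"}}}}'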

kafka.common.InconsistentClusterIdException

[2022-12-09 07:55:54,663] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
[2022-12-09 07:55:56,188] INFO Cluster ID = TSEPCOBgTWiL9AEHhLmQ1g (kafka.server.KafkaServer)
[2022-12-09 07:55:56,224] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.common.InconsistentClusterIdException: The Cluster ID TSEPCOBgTWiL9AEHhLmQ1g doesn't match stored clusterId Some(SuTu3wnsSiyPOayrKvZVMg) in meta.properties. The broker is trying to join the wrong cluster. Configured zookeeper.connect may be wrong.
	at kafka.server.KafkaServer.startup(KafkaServer.scala:224)
	at kafka.Kafka$.main(Kafka.scala:109)
	at kafka.Kafka.main(Kafka.scala)
[2022-12-09 07:55:56,227] INFO shutting down (kafka.server.KafkaServer)
[2022-12-09 07:55:56,231] INFO [ZooKeeperClient Kafka server] Closing. (kafka.zookeeper.ZooKeeperClient)
[2022-12-09 07:55:56,374] INFO Session: 0x200030b29950000 closed (org.apache.zookeeper.ZooKeeper)
[2022-12-09 07:55:56,374] INFO EventThread shut down for session: 0x200030b29950000 (org.apache.zookeeper.ClientCnxn)
[2022-12-09 07:55:56,393] INFO [ZooKeeperClient Kafka server] Closed. (kafka.zookeeper.ZooKeeperClient)
[2022-12-09 07:55:56,409] INFO App info kafka.server for 1 unregistered (org.apache.kafka.common.utils.AppInfoParser)
[2022-12-09 07:55:56,410] INFO shut down completed (kafka.server.KafkaServer)
[2022-12-09 07:55:56,410] ERROR Exiting Kafka due to fatal exception during startup. (kafka.Kafka$)
kafka.common.InconsistentClusterIdException: The Cluster ID TSEPCOBgTWiL9AEHhLmQ1g doesn't match stored clusterId Some(SuTu3wnsSiyPOayrKvZVMg) in meta.properties. The broker is trying to join the wrong cluster. Configured zookeeper.connect may be wrong.
	at kafka.server.KafkaServer.startup(KafkaServer.scala:224)
	at kafka.Kafka$.main(Kafka.scala:109)
	at kafka.Kafka.main(Kafka.scala)
[2022-12-09 07:55:56,417] INFO shutting down (kafka.server.KafkaServer)
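
The broker refuses to start because the cluster ID stored in its local meta.properties no longer matches the one registered in ZooKeeper, which usually happens after the ZooKeeper data was wiped or zookeeper.connect was pointed at a different ensemble. A hedged recovery sketch, assuming zookeeper.connect is actually correct, the broker's log.dirs is /tmp/kafka-logs (check server.properties for the real path), and its old local data may be discarded:

# inspect the cluster ID the broker has stored locally
cat /tmp/kafka-logs/meta.properties
# remove the stale metadata so the broker re-registers under the current cluster ID
rm /tmp/kafka-logs/meta.properties
# then restart the broker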
