
flink-connector-redis's Introduction


1 Project Introduction

A fork of bahir-flink with substantial changes. Compared with bahir, this project:

1. Replaces Jedis with Lettuce and switches from synchronous to asynchronous reads/writes, greatly improving performance
2. Adds the Table/SQL API, with support for select queries and dimension-table (lookup) joins
3. Adds a lookup-join cache (incremental and full)
4. Adds whole-row storage, for lookup joins that return multiple fields
5. Adds rate limiting, for online debugging of Flink SQL
6. Supports recent Flink versions (1.12, 1.13, 1.14+)
7. Unifies the expiration (TTL) policy
8. Supports deletes from Flink CDC and other RowKind.DELETE records
9. Supports select queries

Because bahir targets an old Flink API, the changes are extensive. During development we drew on the stream-computing products of both Tencent Cloud and Alibaba Cloud, combined the strengths of each, and added richer functionality.

Note: Redis does not support two-phase commit, so exactly-once semantics cannot be guaranteed.

2 Usage

2.1 Using the published artifact

The project depends on Lettuce (6.2.1) and netty-transport-native-epoll (4.1.82.Final). If these two packages are already present in your Flink environment, use flink-connector-redis-1.4.2.jar; otherwise use flink-connector-redis-1.4.2-jar-with-dependencies.jar.

<dependency>
    <groupId>io.github.jeff-zou</groupId>
    <artifactId>flink-connector-redis</artifactId>
    <!-- Use the jar-with-dependencies classifier when Lettuce and
         netty-transport-native-epoll are not provided separately -->
    <!--            <classifier>jar-with-dependencies</classifier>-->
    <version>1.4.2</version>
</dependency>

2.2 Building it yourself

Build command: mvn package. Put the generated jar into Flink's lib directory; no other setup is needed.

2.3 Usage examples

-- Create a Redis table
create table redis_table (name varchar, age int)
  with ('connector'='redis', 'host'='10.11.69.176', 'port'='6379', 'password'='test123',
  'redis-mode'='single', 'command'='set');

-- Write
insert into redis_table select * from (values('test', 1));

-- Query
insert into redis_table select name, age + 1 from redis_table /*+ options('scan.key'='test') */;

create table gen_table (age int, level int, proctime as procTime()) with ('connector'='datagen', 'fields.age.kind' = 'sequence',
 'fields.age.start' = '2', 'fields.age.end' = '2', 'fields.level.kind' = 'sequence', 'fields.level.start' = '10', 'fields.level.end' = '10');

-- Lookup join
insert into redis_table select 'test', j.age + 10 from gen_table s left join redis_table for system_time as of proctime as j
on j.name = 'test';

3 Parameters

3.1 Main parameters

| Field | Default | Type | Description |
|---|---|---|---|
| connector | (none) | String | `redis` |
| host | (none) | String | Redis IP |
| port | 6379 | Integer | Redis port |
| password | null | String | null if not set |
| database | 0 | Integer | db0 is used by default |
| timeout | 2000 | Integer | Connection timeout in ms |
| cluster-nodes | (none) | String | Cluster IPs and ports; required when redis-mode is cluster, e.g. 10.11.80.147:7000,10.11.80.147:7001,10.11.80.147:8000 |
| command | (none) | String | The Redis command to use (see the mapping in 3.1.1) |
| redis-mode | (none) | String | Mode: single, cluster, or sentinel |
| lookup.cache.max-rows | -1 | Integer | Lookup cache size, reducing repeated queries for the same Redis key |
| lookup.cache.ttl | -1 | Integer | Lookup cache TTL in seconds; the cache is enabled only when neither max-rows nor ttl is -1 |
| lookup.cache.load-all | false | Boolean | Enable full caching: for hget, all entries of the Redis map are loaded into the cache, mitigating cache penetration |
| max.retries | 1 | Integer | Number of retries on write/query failure |
| value.data.structure | column | String | column: the value comes from a single field (e.g. for set, the key is the first field in the DDL and the value is the second). row: the whole row is stored in the value, separated by '\01' |
| set.if.absent | false | Boolean | Write only when the key does not exist; only effective for set and hset |
| io.pool.size | (none) | Integer | Size of the netty I/O thread pool inside Lettuce; defaults to the number of threads available to the JVM, and at least 2 |
| event.pool.size | (none) | Integer | Size of the netty event thread pool inside Lettuce; defaults to the number of threads available to the JVM, and at least 2 |
| scan.key | (none) | String | Redis key to query |
| scan.addition.key | (none) | String | Additional key constraint for queries, e.g. the hash field for map structures |
| scan.range.start | (none) | Integer | lrange start when querying a list |
| scan.range.stop | (none) | Integer | lrange stop when querying a list |
| scan.count | (none) | Integer | srandmember count when querying a set |

3.1.1 Mapping between command values and Redis commands:

| command value | write | query | lookup join | delete (RowKind.DELETE, e.g. from Flink CDC) |
|---|---|---|---|---|
| set | set | get | get | del |
| hset | hset | hget | hget | hdel |
| get | set | get | get | del |
| hget | hset | hget | hget | hdel |
| rpush | rpush | lrange | | |
| lpush | lpush | lrange | | |
| incrBy / incrByFloat | incrBy / incrByFloat | get | get | writes the negated delta, e.g. incrby 2 -> incrby -2 |
| hincrBy / hincrByFloat | hincrBy / hincrByFloat | hget | hget | writes the negated delta, e.g. hincrby 2 -> hincrby -2 |
| zincrby | zincrby | zscore | zscore | writes the negated delta, e.g. zincrby 2 -> zincrby -2 |
| sadd | sadd | srandmember 10 | | srem |
| zadd | zadd | zscore | zscore | zrem |
| pfadd(hyperloglog) | pfadd(hyperloglog) | | | |
| publish | publish | | | |
| zrem | zrem | zscore | zscore | |
| srem | srem | srandmember 10 | | |
| del | del | get | get | |
| hdel | hdel | hget | hget | |
| decrBy | decrBy | get | get | |

Note: an empty cell means the operation is not supported.
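For instance, with 'command'='set', insert/update rows become SET and a RowKind.DELETE record (e.g. from Flink CDC) becomes DEL on the same key. A sketch; the connection values are illustrative:

```sql
-- With 'command'='set': +I / +U rows are written with SET, -D rows issue DEL.
create table redis_sink (name VARCHAR, age INT) with (
  'connector'='redis', 'host'='10.11.69.176', 'port'='6379',
  'redis-mode'='single', 'command'='set');

-- Writing +I('tom', 30) runs:      set tom 30
-- A CDC delete -D('tom', 30) runs: del tom
```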

3.1.2 value.data.structure = column (default)

The Redis key is not mapped via a primary key; it is determined by the field order in the DDL. For example:

create table sink_redis(username VARCHAR, passport VARCHAR)  with ('command'='set') 
-- username is the key, passport is the value.

create table sink_redis(name VARCHAR, subject VARCHAR, score VARCHAR)  with ('command'='hset') 
-- name is the key of the map, subject is the field, score is the value.

3.1.3 value.data.structure = row

The whole row is stored in the value, separated by '\01'.

create table sink_redis(username VARCHAR, passport VARCHAR)  with ('command'='set') 
-- username is the key, username\01passport is the value.

create table sink_redis(name VARCHAR, subject VARCHAR, score VARCHAR)  with ('command'='hset') 
-- name is the key of the map, subject is the field, name\01subject\01score is the value.
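The '\01' separator is the ASCII control character 0x01, so a value stored in row mode can be split back into its fields with a plain string split. A minimal sketch in standalone Java (not connector code; the sample value is illustrative):

```java
public class RowValueSplitDemo {
    public static void main(String[] args) {
        // A value written with 'value.data.structure'='row' for a
        // three-field table (name, subject, score):
        String stored = "tom\u0001math\u0001150";

        // Split on the \01 separator to recover the original fields.
        String[] fields = stored.split("\u0001");

        System.out.println(fields.length); // 3
        System.out.println(fields[0]);     // tom
        System.out.println(fields[2]);     // 150
    }
}
```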

3.2 TTL-related sink parameters

| Field | Default | Type | Description |
|---|---|---|---|
| ttl | (none) | Integer | Key expiration time in seconds, set on every sink write |
| ttl.on.time | (none) | String | Time of day at which the key expires, in LocalTime.toString() format, e.g. 10:00 or 12:12:01; only effective when ttl is not configured |
| ttl.key.not.absent | false | Boolean | Used together with ttl; the TTL is set only when the key does not already exist |
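For example, a sink whose keys expire one day after each write (a sketch; the host, password, and table are placeholders):

```sql
create table sink_redis(username VARCHAR, passport VARCHAR) with (
  'connector'='redis', 'host'='10.11.69.176', 'port'='6379',
  'redis-mode'='single', 'password'='***',
  'command'='set',
  'ttl'='86400'   -- expire each key 86400 s (1 day) after every write
);
```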

3.3 Parameters for limiting sink resource usage when debugging SQL online:

| Field | Default | Type | Description |
|---|---|---|---|
| sink.limit | false | Boolean | Whether to enable the limit |
| sink.limit.max-num | 10000 | Integer | Maximum number of records each slot in a taskmanager may write |
| sink.limit.interval | 100 | String | Interval in milliseconds between writes for each slot in a taskmanager |
| sink.limit.max-online | 30 * 60 * 1000L | Long | Maximum online time in milliseconds for each slot in a taskmanager |
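A sketch of a debug sink that caps its own output (all values are illustrative):

```sql
create table debug_redis_sink(name VARCHAR, age INT) with (
  'connector'='redis', 'host'='10.11.69.176', 'port'='6379',
  'redis-mode'='single', 'command'='set',
  'sink.limit'='true',              -- enable the limiter
  'sink.limit.max-num'='100',       -- stop after 100 records per slot
  'sink.limit.interval'='10',       -- wait 10 ms between writes
  'sink.limit.max-online'='60000'   -- stop after 1 minute online
);
```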

3.4 Additional connection parameters when the cluster type is sentinel:

| Field | Default | Type | Description |
|---|---|---|---|
| master.name | (none) | String | Master name |
| sentinels.info | (none) | String | e.g. 10.11.80.147:7000,10.11.80.147:7001,10.11.80.147:8000 |
| sentinels.password | (none) | String | Password of the sentinel processes |
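A sentinel-mode DDL might look like this (a sketch; the master name, addresses, and passwords are placeholders):

```sql
create table sentinel_redis_sink(name VARCHAR, age INT) with (
  'connector'='redis',
  'redis-mode'='sentinel',
  'master.name'='mymaster',
  'sentinels.info'='10.11.80.147:26379,10.11.80.148:26379,10.11.80.149:26379',
  'sentinels.password'='***',
  'password'='***',   -- password of the Redis data nodes
  'command'='set'
);
```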

4 Data type conversion

| Flink type | Write to Redis (as String) | Read from Redis (from String) |
|---|---|---|
| CHAR | String | String |
| VARCHAR | String | String |
| STRING | String | String |
| BOOLEAN | String.valueOf(boolean val) | Boolean.valueOf(String str) |
| BINARY | Base64.getEncoder().encodeToString | Base64.getDecoder().decode(String str) |
| VARBINARY | Base64.getEncoder().encodeToString | Base64.getDecoder().decode(String str) |
| DECIMAL | BigDecimal.toString | DecimalData.fromBigDecimal(new BigDecimal(String str), int precision, int scale) |
| TINYINT | String.valueOf(byte val) | Byte.valueOf(String str) |
| SMALLINT | String.valueOf(short val) | Short.valueOf(String str) |
| INTEGER | String.valueOf(int val) | Integer.valueOf(String str) |
| DATE | the day from epoch as int | shown as 2022-01-01 |
| TIME | the millisecond from 0 o'clock as int | shown as 04:04:01.023 |
| BIGINT | String.valueOf(long val) | Long.valueOf(String str) |
| FLOAT | String.valueOf(float val) | Float.valueOf(String str) |
| DOUBLE | String.valueOf(double val) | Double.valueOf(String str) |
| TIMESTAMP | the millisecond from epoch as long | TimestampData.fromEpochMillis(Long.valueOf(String str)) |
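As the table shows, BINARY/VARBINARY values are stored in Redis as Base64 text. The round trip can be reproduced with the JDK alone (a standalone sketch, independent of the connector):

```java
import java.util.Arrays;
import java.util.Base64;

public class BinaryConversionDemo {
    public static void main(String[] args) {
        byte[] raw = {(byte) 0xCA, (byte) 0xFE, 0x01, 0x02};

        // Writing: the converter stores the bytes as a Base64 string.
        String stored = Base64.getEncoder().encodeToString(raw);

        // Reading: the string fetched from Redis is decoded back to bytes.
        byte[] back = Base64.getDecoder().decode(stored);

        System.out.println(stored);                   // yv4BAg==
        System.out.println(Arrays.equals(raw, back)); // true
    }
}
```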

5 Examples

  • 5.1 Lookup (dimension table) query:

create table sink_redis(name varchar, level varchar, age varchar) with ( 'connector'='redis', 'host'='10.11.80.147','port'='7001', 'redis-mode'='single','password'='******','command'='hset');

-- First insert data into Redis, equivalent to the Redis command: hset 3 3 100 --
insert into sink_redis select * from (values ('3', '3', '100'));
                
create table dim_table (name varchar, level varchar, age varchar) with ('connector'='redis', 'host'='10.11.80.147','port'='7001', 'redis-mode'='single', 'password'='*****','command'='hget', 'maxIdle'='2', 'minIdle'='1', 'lookup.cache.max-rows'='10', 'lookup.cache.ttl'='10', 'max-retries'='3');
    
-- Generate values from 1 to 10 as the source --
-- One row will have username = 3, level = 3, and will join the data inserted above -- 
create table source_table (username varchar, level varchar, proctime as procTime()) with ('connector'='datagen',  'rows-per-second'='1',  'fields.username.kind'='sequence',  'fields.username.start'='1',  'fields.username.end'='10', 'fields.level.kind'='sequence',  'fields.level.start'='1',  'fields.level.end'='10');

create table sink_table(username varchar, level varchar,age varchar) with ('connector'='print');

insert into
	sink_table
select
	s.username,
	s.level,
	d.age
from
	source_table s
left join dim_table for system_time as of s.proctime as d on
	d.name = s.username
	and d.level = s.level;
-- The row with username = 3 joins the value in Redis; output: 3,3,100

  • 5.2 Lookup join with multiple fields

Dimension tables often have multiple fields. This example shows how to use 'value.data.structure'='row' to write multiple fields and query them back in a join.

-- Create the table
create table sink_redis(uid VARCHAR,score double,score2 double )
with ( 'connector' = 'redis',
			'host' = '10.11.69.176',
			'port' = '6379',
			'redis-mode' = 'single',
			'password' = '****',
			'command' = 'SET',
			'value.data.structure' = 'row');  -- 'value.data.structure'='row': the whole row is stored in the value, separated by '\01'
-- Write test data; score and score2 are the two dimension fields to be retrieved by the join
insert into sink_redis select * from (values ('1', 10.3, 10.1));

-- In Redis, the value is: "1\x0110.3\x0110.1" --
-- End of write --

-- create join table --
create table join_table with ('command'='get', 'value.data.structure'='row') like sink_redis

-- create result table --
create table result_table(uid VARCHAR, username VARCHAR, score double, score2 double) with ('connector'='print')

-- create source table --
create table source_table(uid VARCHAR, username VARCHAR, proc_time as procTime()) with ('connector'='datagen', 'fields.uid.kind'='sequence', 'fields.uid.start'='1', 'fields.uid.end'='2')

-- Join against the dimension table to get multiple field values --
insert
	into
	result_table
select
	s.uid,
	s.username,
	j.score, -- from the dimension table
	j.score2 -- from the dimension table
from
	source_table as s
join join_table for system_time as of s.proc_time as j on
	j.uid = s.uid
	
result:
2> +I[2, 1e0fe885a2990edd7f13dd0b81f923713182d5c559b21eff6bda3960cba8df27c69a3c0f26466efaface8976a2e16d9f68b3, null, null]
1> +I[1, 30182e00eca2bff6e00a2d5331e8857a087792918c4379155b635a3cf42a53a1b8f3be7feb00b0c63c556641423be5537476, 10.3, 10.1]
  • 5.3 DataStream usage

    Example code path: src/test/java/org.apache.flink.streaming.connectors.redis.datastream.DataStreamTest.java
    hset example, equivalent to the Redis command: hset tom math 150

        Configuration configuration = new Configuration();
        configuration.setString(REDIS_MODE, REDIS_SINGLE);
        configuration.setString(REDIS_COMMAND, RedisCommand.HSET.name());
        configuration.setInteger(TTL, 10);

        RedisSinkMapper redisMapper = new RowRedisSinkMapper(RedisCommand.HSET, configuration);

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        BinaryRowData binaryRowData = new BinaryRowData(3);
        BinaryRowWriter binaryRowWriter = new BinaryRowWriter(binaryRowData);
        binaryRowWriter.writeString(0, StringData.fromString("tom"));
        binaryRowWriter.writeString(1, StringData.fromString("math"));
        binaryRowWriter.writeString(2, StringData.fromString("152"));

        DataStream<BinaryRowData> dataStream = env.fromElements(binaryRowData, binaryRowData);

        List<String> columnNames = Arrays.asList("name", "subject", "scope");
        List<DataType> columnDataTypes =
                Arrays.asList(DataTypes.STRING(), DataTypes.STRING(), DataTypes.STRING());
        ResolvedSchema resolvedSchema = ResolvedSchema.physical(columnNames, columnDataTypes);

        FlinkConfigBase conf =
                new FlinkSingleConfig.Builder()
                        .setHost(REDIS_HOST)
                        .setPort(REDIS_PORT)
                        .setPassword(REDIS_PASSWORD)
                        .build();

        RedisSinkFunction redisSinkFunction =
                new RedisSinkFunction<>(conf, redisMapper, resolvedSchema, configuration);

        dataStream.addSink(redisSinkFunction).setParallelism(1);
        env.execute("RedisSinkTest");
  • 5.4 redis-cluster write example

    Example code path: src/test/java/org.apache.flink.streaming.connectors.redis.table.SQLInsertTest.java
    set example, equivalent to the Redis command: set test test11

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
EnvironmentSettings environmentSettings = EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build();
StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, environmentSettings);

String ddl = "create table sink_redis(username VARCHAR, passport VARCHAR) with ( 'connector'='redis', " +
              "'cluster-nodes'='10.11.80.147:7000,10.11.80.147:7001','redis-mode'='cluster','password'='******','command'='set')" ;

tEnv.executeSql(ddl);
String sql = " insert into sink_redis select * from (values ('test', 'test11'))";
TableResult tableResult = tEnv.executeSql(sql);
tableResult.getJobClient().get()
.getJobExecutionResult()
.get();

6 Contact me to get issues resolved

(contact QR code image)

7 Development environment

IDE: IntelliJ IDEA

Code format: google-java-format + Save Actions

Flink 1.12/1.13/1.14+

JDK 1.8, Lettuce 6.2.1

8 Contributing

Submit pull requests against the dev branch.
Before submitting, format the code with mvn spotless:apply, then build with mvn package to confirm that all test cases pass.

9 Flink 1.12 support

Switch to the flink-1.12 branch (note: the 1.12 branch uses Jedis).

<dependency>
    <groupId>io.github.jeff-zou</groupId>
    <artifactId>flink-connector-redis</artifactId>
    <version>1.1.1-1.12</version>
</dependency>

flink-connector-redis's People

Contributors: fullee, janeslau, jeff-zou, jszouxue, stonewuu


flink-connector-redis's Issues

Connection failure in sentinel mode

1. The Flink SQL was written as follows:
(screenshot)
Dependency jar used:
(screenshot)
The jar was placed under flink/lib and the job was submitted in standalone mode. The following error occurred:
2023-04-07 15:44:52
org.apache.flink.runtime.JobException: Recovery is suppressed by NoRestartBackoffTimeStrategy
at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:138)
at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:82)
at org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:301)
at org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:291)
at org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:282)
at org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:739)
at org.apache.flink.runtime.scheduler.SchedulerNG.updateTaskExecutionState(SchedulerNG.java:78)
at org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:443)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$handleRpcInvocation$1(AkkaRpcActor.java:304)
at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:83)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:302)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:217)
at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:78)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:163)
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24)
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20)
at scala.PartialFunction.applyOrElse(PartialFunction.scala:123)
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122)
at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
at akka.actor.Actor.aroundReceive(Actor.scala:537)
at akka.actor.Actor.aroundReceive$(Actor.scala:535)
at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:220)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:580)
at akka.actor.ActorCell.invoke(ActorCell.scala:548)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270)
at akka.dispatch.Mailbox.run(Mailbox.scala:231)
at akka.dispatch.Mailbox.exec(Mailbox.scala:243)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
Caused by: io.lettuce.core.RedisConnectionException: Cannot connect to a Redis Sentinel: [redis://@10.16.216.111:26379, redis://@10.16.216.112:26379, redis://@10.16.216.113:26379]
at io.lettuce.core.RedisConnectionException.create(RedisConnectionException.java:72)
at io.lettuce.core.RedisConnectionException.create(RedisConnectionException.java:56)
at io.lettuce.core.AbstractRedisClient.getConnection(AbstractRedisClient.java:350)
at io.lettuce.core.RedisClient.connect(RedisClient.java:216)
at io.lettuce.core.RedisClient.connect(RedisClient.java:201)
at org.apache.flink.streaming.connectors.redis.common.container.RedisContainer.open(RedisContainer.java:57)
at org.apache.flink.streaming.connectors.redis.table.RedisSinkFunction.open(RedisSinkFunction.java:282)
at org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:34)
at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:100)
at org.apache.flink.table.runtime.operators.sink.SinkOperator.open(SinkOperator.java:58)
at org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:107)
at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:703)
at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.call(StreamTaskActionExecutor.java:100)
at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:679)
at org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:646)
at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:948)
at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:917)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:741)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:563)
at java.lang.Thread.run(Thread.java:748)
Caused by: io.lettuce.core.RedisConnectionException: Cannot connect Redis Sentinel at redis://@10.16.216.113:26379
at io.lettuce.core.RedisClient.lambda$connectSentinelAsync$6(RedisClient.java:527)
at reactor.core.publisher.Mono.lambda$onErrorMap$31(Mono.java:3776)
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onError(FluxOnErrorResume.java:94)
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onError(FluxOnErrorResume.java:106)
at reactor.core.publisher.Operators.error(Operators.java:198)
at reactor.core.publisher.MonoError.subscribe(MonoError.java:53)
at reactor.core.publisher.Mono.subscribe(Mono.java:4455)
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onError(FluxOnErrorResume.java:103)
at reactor.core.publisher.MonoCompletionStage.lambda$subscribe$0(MonoCompletionStage.java:86)
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
at io.lettuce.core.AbstractRedisClient.lambda$null$5(AbstractRedisClient.java:470)
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
at io.lettuce.core.protocol.RedisHandshakeHandler.lambda$fail$4(RedisHandshakeHandler.java:139)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:552)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491)
at io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:184)
at io.netty.channel.DefaultChannelPromise.addListener(DefaultChannelPromise.java:95)
at io.netty.channel.DefaultChannelPromise.addListener(DefaultChannelPromise.java:30)
at io.lettuce.core.protocol.RedisHandshakeHandler.fail(RedisHandshakeHandler.java:138)
at io.lettuce.core.protocol.RedisHandshakeHandler.lambda$channelActive$3(RedisHandshakeHandler.java:107)
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
at java.util.concurrent.CompletableFuture.uniWhenCompleteStage(CompletableFuture.java:778)
at java.util.concurrent.CompletableFuture.whenComplete(CompletableFuture.java:2140)
at java.util.concurrent.CompletableFuture.whenComplete(CompletableFuture.java:110)
at io.lettuce.core.protocol.RedisHandshakeHandler.channelActive(RedisHandshakeHandler.java:102)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelActive(AbstractChannelHandlerContext.java:230)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelActive(AbstractChannelHandlerContext.java:216)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelActive(AbstractChannelHandlerContext.java:209)
at io.netty.channel.ChannelInboundHandlerAdapter.channelActive(ChannelInboundHandlerAdapter.java:69)
at io.lettuce.core.ChannelGroupListener.channelActive(ChannelGroupListener.java:57)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelActive(AbstractChannelHandlerContext.java:230)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelActive(AbstractChannelHandlerContext.java:216)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelActive(AbstractChannelHandlerContext.java:209)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelActive(DefaultChannelPipeline.java:1398)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelActive(AbstractChannelHandlerContext.java:230)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelActive(AbstractChannelHandlerContext.java:216)
at io.netty.channel.DefaultChannelPipeline.fireChannelActive(DefaultChannelPipeline.java:895)
at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:658)
at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691)
at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567)
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
... 1 more
Caused by: io.netty.channel.socket.ChannelOutputShutdownException: Channel output shutdown
at io.netty.channel.AbstractChannel$AbstractUnsafe.shutdownOutput(AbstractChannel.java:650)
at io.netty.channel.AbstractChannel$AbstractUnsafe.handleWriteError(AbstractChannel.java:953)
at io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:933)
at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.flush0(AbstractEpollChannel.java:557)
at io.netty.channel.AbstractChannel$AbstractUnsafe.flush(AbstractChannel.java:895)
at io.netty.channel.DefaultChannelPipeline$HeadContext.flush(DefaultChannelPipeline.java:1372)
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:750)
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:742)
at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:728)
at io.netty.channel.ChannelDuplexHandler.flush(ChannelDuplexHandler.java:127)
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:750)
at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:765)
at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:790)
at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:758)
at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:808)
at io.netty.channel.DefaultChannelPipeline.writeAndFlush(DefaultChannelPipeline.java:1025)
at io.netty.channel.AbstractChannel.writeAndFlush(AbstractChannel.java:306)
at io.lettuce.core.RedisHandshake.dispatch(RedisHandshake.java:250)
at io.lettuce.core.RedisHandshake.dispatchHello(RedisHandshake.java:208)
at io.lettuce.core.RedisHandshake.initiateHandshakeResp3(RedisHandshake.java:196)
at io.lettuce.core.RedisHandshake.tryHandshakeResp3(RedisHandshake.java:97)
at io.lettuce.core.RedisHandshake.initialize(RedisHandshake.java:85)
at io.lettuce.core.protocol.RedisHandshakeHandler.channelActive(RedisHandshakeHandler.java:100)
... 21 more
Caused by: java.lang.UnsatisfiedLinkError: io.netty.channel.unix.Socket.sendAddress(IJII)I
at io.netty.channel.unix.Socket.sendAddress(Native Method)
at io.netty.channel.unix.Socket.sendAddress(Socket.java:302)
at io.netty.channel.epoll.AbstractEpollChannel.doWriteBytes(AbstractEpollChannel.java:362)
at io.netty.channel.epoll.AbstractEpollStreamChannel.writeBytes(AbstractEpollStreamChannel.java:260)
at io.netty.channel.epoll.AbstractEpollStreamChannel.doWriteSingle(AbstractEpollStreamChannel.java:471)
at io.netty.channel.epoll.AbstractEpollStreamChannel.doWrite(AbstractEpollStreamChannel.java:429)
at io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:931)
... 41 more

[flink1.16.0]java.lang.UnsatisfiedLinkError: 'int io.netty.channel.unix.Socket.sendAddress(int, long, int, int)'

io.lettuce.core.RedisConnectionException: Unable to connect to x.x.x.x:6379
at io.lettuce.core.RedisConnectionException.create(RedisConnectionException.java:78)
at io.lettuce.core.RedisConnectionException.create(RedisConnectionException.java:56)
at io.lettuce.core.AbstractRedisClient.getConnection(AbstractRedisClient.java:350)
at io.lettuce.core.RedisClient.connect(RedisClient.java:215)
at io.lettuce.core.RedisClient.connect(RedisClient.java:200)
at org.apache.flink.streaming.connectors.redis.common.container.RedisContainer.open(RedisContainer.java:57)
at org.apache.flink.streaming.connectors.redis.table.RedisSinkFunction.open(RedisSinkFunction.java:312)
at org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:34)
at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:100)
at org.apache.flink.table.runtime.operators.sink.SinkOperator.open(SinkOperator.java:58)
at org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:107)
at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:726)
at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:702)
at org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:669)
at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:935)
at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:904)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:728)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:550)
at java.base/java.lang.Thread.run(Unknown Source)
Caused by: io.netty.channel.socket.ChannelOutputShutdownException: Channel output shutdown
at io.netty.channel.AbstractChannel$AbstractUnsafe.shutdownOutput(AbstractChannel.java:650)
at io.netty.channel.AbstractChannel$AbstractUnsafe.handleWriteError(AbstractChannel.java:953)
at io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:933)
at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.flush0(AbstractEpollChannel.java:557)
at io.netty.channel.AbstractChannel$AbstractUnsafe.flush(AbstractChannel.java:895)
at io.netty.channel.DefaultChannelPipeline$HeadContext.flush(DefaultChannelPipeline.java:1372)
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:921)
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:907)
at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:893)
at io.netty.channel.ChannelDuplexHandler.flush(ChannelDuplexHandler.java:127)
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:923)
at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:941)
at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:966)
at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:934)
at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:984)
at io.netty.channel.DefaultChannelPipeline.writeAndFlush(DefaultChannelPipeline.java:1025)
at io.netty.channel.AbstractChannel.writeAndFlush(AbstractChannel.java:306)
at io.lettuce.core.RedisHandshake.dispatch(RedisHandshake.java:287)
at io.lettuce.core.RedisHandshake.dispatchHello(RedisHandshake.java:224)
at io.lettuce.core.RedisHandshake.initiateHandshakeResp3(RedisHandshake.java:212)
at io.lettuce.core.RedisHandshake.tryHandshakeResp3(RedisHandshake.java:102)
at io.lettuce.core.RedisHandshake.initialize(RedisHandshake.java:89)
at io.lettuce.core.protocol.RedisHandshakeHandler.channelActive(RedisHandshakeHandler.java:100)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelActive(AbstractChannelHandlerContext.java:262)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelActive(AbstractChannelHandlerContext.java:238)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelActive(AbstractChannelHandlerContext.java:231)
at io.netty.channel.ChannelInboundHandlerAdapter.channelActive(ChannelInboundHandlerAdapter.java:69)
at io.lettuce.core.ChannelGroupListener.channelActive(ChannelGroupListener.java:57)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelActive(AbstractChannelHandlerContext.java:262)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelActive(AbstractChannelHandlerContext.java:238)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelActive(AbstractChannelHandlerContext.java:231)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelActive(DefaultChannelPipeline.java:1398)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelActive(AbstractChannelHandlerContext.java:258)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelActive(AbstractChannelHandlerContext.java:238)
at io.netty.channel.DefaultChannelPipeline.fireChannelActive(DefaultChannelPipeline.java:895)
at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:658)
at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691)
at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567)
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
... 1 more
Caused by: java.lang.UnsatisfiedLinkError: 'int io.netty.channel.unix.Socket.sendAddress(int, long, int, int)'
at io.netty.channel.unix.Socket.sendAddress(Native Method)
at io.netty.channel.unix.Socket.sendAddress(Socket.java:302)
at io.netty.channel.epoll.AbstractEpollChannel.doWriteBytes(AbstractEpollChannel.java:362)
at io.netty.channel.epoll.AbstractEpollStreamChannel.writeBytes(AbstractEpollStreamChannel.java:260)
at io.netty.channel.epoll.AbstractEpollStreamChannel.doWriteSingle(AbstractEpollStreamChannel.java:471)
at io.netty.channel.epoll.AbstractEpollStreamChannel.doWrite(AbstractEpollStreamChannel.java:429)
at io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:931)
... 41 more

Hi, this project pulls in 35 open-source components and has 4 known vulnerabilities; please upgrade.

Detected that jeff-zou/flink-connector-redis pulls in 35 open-source components containing 4 vulnerabilities.

Vulnerability: Apache Commons Compress security vulnerability
Affected component: org.apache.commons:[email protected]
CVE ID: CVE-2021-35517
Description: Apache Commons Compress is an Apache Software Foundation library for working with compressed archives.
It contains a resource-management flaw: when reading a specially crafted TAR archive, Compress can allocate large amounts of memory, so a small input can trigger an out-of-memory error.
Affected range: [1.1, 1.21)
Minimum fixed version: 1.21
Dependency path: org.apache.xsj:[email protected]>org.apache.flink:[email protected]>org.apache.flink:[email protected]>org.apache.commons:[email protected]

There are 4 more vulnerabilities; detailed report: https://mofeisec.com/jr?p=aa6651

Error: "no match redis"

Using flink-connector-redis-1.2.7-jar-with-dependencies.jar with Flink 1.15.
sql:

create table sink_redis(user_key VARCHAR, field VARCHAR, inform VARCHAR) with (
'connector' = 'redis',
'host' = 'xxxx',
'port' = 'xxxx',
'password' = 'xxx',
'database' = '8',
'ttl' = '86400',
-- key TTL on sink, in seconds
'command' = 'hset'
);

Error:
Caused by: java.lang.RuntimeException: no match redis
at org.apache.flink.streaming.connectors.redis.common.hanlder.RedisHandlerServices.filterByContext(RedisHandlerServices.java:166)
at org.apache.flink.streaming.connectors.redis.common.hanlder.RedisHandlerServices.filter(RedisHandlerServices.java:90)
at org.apache.flink.streaming.connectors.redis.common.hanlder.RedisHandlerServices.findSingRedisHandler(RedisHandlerServices.java:76)
at org.apache.flink.streaming.connectors.redis.common.hanlder.RedisHandlerServices.findRedisHandler(RedisHandlerServices.java:58)
at org.apache.flink.streaming.connectors.redis.table.RedisDynamicTableSink.(RedisDynamicTableSink.java:42)
at org.apache.flink.streaming.connectors.redis.table.RedisDynamicTableFactory.createDynamicTableSink(RedisDynamicTableFactory.java:65)
at org.apache.flink.table.factories.FactoryUtil.createDynamicTableSink(FactoryUtil.java:259)

Consuming from redis

Any plans to support streaming consumption of redis lists via lpop/rpop, the way one consumes from Kafka?

hget problem in sql-client

In redis I ran
hset school class1 stu1 class2 stu2
then in sql-client executed
create table sink_redis_get (org varchar, sec_org varchar, v varchar) with ('connector'='redis','host'='localhost','port'='6379','password'='***','redis-mode'='single','command'='hget');
and got an error.

[ERROR] Could not execute SQL statement. Reason: org.apache.calcite.plan.RelOptPlanner$CannotPlanException: There are not enough rules to produce a node with desired properties: convention=STREAM_PHYSICAL, FlinkRelDistributionTraitDef=any, MiniBatchIntervalTraitDef=None: 0, ModifyKindSetTraitDef=[NONE], UpdateKindTraitDef=[NONE]. Missing conversion is FlinkLogicalTableSourceScan[convention: LOGICAL -> STREAM_PHYSICAL] There is 1 empty subset: rel#170:RelSubset#6.STREAM_PHYSICAL.any.None: 0.[NONE].[NONE], the relevant part of the original plan is as follows

java.lang.VerifyError: Bad return type

create table sink_redis(
name string,
level string,
age string)
with (
'connector'='redis',
'host'='192.168.90.104',
'port'='6379',
'redis-mode'='single',
'password'='123456',
'command'='hset');

select * from sink_redis;

[ERROR] Could not execute SQL statement. Reason:
java.lang.VerifyError: Bad return type
Exception Details:
Location: org/apache/flink/streaming/connectors/redis/common/mapper/row/sink/RowRedisSinkMapper.getCommandDescription()Lorg/apache/flink/streaming/connectors/redis/common/mapper/RedisCommandBaseDescription; @4: areturn
Reason:
Type 'org/apache/flink/streaming/connectors/redis/common/mapper/RedisCommandDescription' (current frame, stack[0]) is not assignable to 'org/apache/flink/streaming/connectors/redis/common/mapper/RedisCommandBaseDescription' (from method signature)
Current Frame:
bci: @4
flags: { }
locals: { 'org/apache/flink/streaming/connectors/redis/common/mapper/row/sink/RowRedisSinkMapper' }
stack: { 'org/apache/flink/streaming/connectors/redis/common/mapper/RedisCommandDescription' }
Bytecode:
0x0000000: 2ab6 0013 b0

I used sql-client; the Flink version is 1.13.6.

Lettuce API Implicit Failure

Hi,
I've gone through the source code of RedisContainer and may have found a bug.

Since Jedis was replaced with Lettuce as the Redis client here, the error-handling logic should be modified as well.
According to Lettuce: Error handling, when using the async API an error must be explicitly handled by applying handle() or exceptionally() to the returned RedisFuture; otherwise the failure is silently swallowed.

Discussion is welcome since I am new to async programming.
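The point can be sketched with a plain CompletableFuture (RedisFuture implements CompletionStage, so the same operators apply). This is an illustrative sketch of the handle() pattern, not the connector's actual code:

```java
import java.util.concurrent.CompletableFuture;

public class AsyncErrorHandlingDemo {

    // Stand-in for a RedisFuture whose command failed on the wire.
    static CompletableFuture<String> failedCommand() {
        CompletableFuture<String> future = new CompletableFuture<>();
        future.completeExceptionally(new RuntimeException("connection reset"));
        return future;
    }

    // Without handle()/exceptionally(), nothing ever observes the failure.
    // handle() surfaces it so it can be logged, rethrown, or replaced.
    static String runWithHandler() {
        return failedCommand()
                .handle((value, error) ->
                        error != null ? "<failed: " + error.getMessage() + ">" : value)
                .join();
    }

    public static void main(String[] args) {
        System.out.println(runWithHandler());
    }
}
```

Dropping the handle() stage and calling only thenAccept(...) on the failed future would silently do nothing, which is the implicit failure described above.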

How to set a TTL

Hi, I see TTL handling in the code, but how do I set a TTL from Flink SQL? Could you give a concrete example? Thanks.

Support use as a lookup table on Flink 1.15.x

The program runs perfectly on Flink 1.14.x. On Flink 1.15.x it breaks because the Cache class that RedisDynamicTableFactory depends on, com.google.common.cache.Cache, comes from the old Flink 1.14.x packaging; in the new version the package path has changed to org.apache.flink.shaded.guava30.com.google.common.cache.Cache (provided by flink-shaded-guava). So flink-connector-redis 1.2.3 on 1.15.x works fine as a sink, but fails when used as a lookup table.

Please update the dependency.

How to query a hash structure

Help needed: I can't work out how to query a hash structure. My data has two fields; the dimension table is defined as shown in the screenshot, and using it produces an error. (screenshots omitted)

How to use HSET with a table shaped like this

I have a table structured as below and would like to use HSET from Table SQL. Do you support this, or have any suggestions?

key field1 field2 field3
1 'a' 1 'aa'
2 'b' 1 'bb'

What I am trying to do is run these commands:
HSET 1 field1 'a' field2 1 field3 'aa'
HSET 2 field1 'b' field2 1 field3 'bb'

Does the connector not support sentinel-mode clusters?

Caused by: org.apache.flink.table.api.ValidationException: Unsupported options found for 'redis'.

Unsupported options:

master.name
sentinels.info
sentinels.password

Supported options:

cache.max-retries
cache.max-rows
cache.penetration.prevent
cache.ttl
cache.type
cluster-nodes
command
connector
database
field-column
host
key-column
lookup.hash.enable
lookup.redis.datatype
maxIdle
maxTotal
minIdle
password
port
property-version
put-if-absent
redis-mode
timeout
ttl
value-column

sql-client query error in version 1.2.1

Flink SQL> select * from alarm_status_redis_dim;
[ERROR] Could not execute SQL statement. Reason:
org.apache.calcite.plan.RelOptPlanner$CannotPlanException: There are not enough rules to produce a node with desired properties: convention=STREAM_PHYSICAL, FlinkRelDistributionTraitDef=any, MiniBatchIntervalTraitDef=None: 0, ModifyKindSetTraitDef=[NONE], UpdateKindTraitDef=[NONE].
Missing conversion is FlinkLogicalTableSourceScan[convention: LOGICAL -> STREAM_PHYSICAL]
There is 1 empty subset: rel#553:RelSubset#14.STREAM_PHYSICAL.any.None: 0.[NONE].[NONE], the relevant part of the original plan is as follows
540:FlinkLogicalTableSourceScan(table=[[default_catalog, default_database, alarm_status_redis_dim]], fields=[alarmKey, alarmStatus])

Root: rel#551:RelSubset#15.STREAM_PHYSICAL.any.None: 0.[NONE].[NONE]
Original rel:
FlinkLogicalSink(subset=[rel#117:RelSubset#1.LOGICAL.any.None: 0.[NONE].[NONE]], table=[default_catalog.default_database.sink_redis], fields=[alarmKey, alarmStatus]): rowcount = 1.0E8, cumulative cost = {1.0E8 rows, 1.0E8 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 121
FlinkLogicalTableSourceScan(subset=[rel#120:RelSubset#0.LOGICAL.any.None: 0.[NONE].[NONE]], table=[[default_catalog, default_database, per_source]], fields=[alarmKey, alarmStatus]): rowcount = 1.0E8, cumulative cost = {1.0E8 rows, 1.0E8 cpu, 1.6E9 io, 0.0 network, 0.0 memory}, id = 119

Sets:
Set#14, type: RecordType(VARCHAR(2147483647) alarmKey, INTEGER alarmStatus)
rel#548:RelSubset#14.LOGICAL.any.None: 0.[NONE].[NONE], best=rel#540
rel#540:FlinkLogicalTableSourceScan.LOGICAL.any.None: 0.[NONE].[NONE](table=[default_catalog, default_database, alarm_status_redis_dim],fields=alarmKey, alarmStatus), rowcount=1.0E8, cumulative cost={1.0E8 rows, 1.0E8 cpu, 1.6E9 io, 0.0 network, 0.0 memory}
rel#553:RelSubset#14.STREAM_PHYSICAL.any.None: 0.[NONE].[NONE], best=null
Set#15, type: RecordType(VARCHAR(2147483647) alarmKey, INTEGER alarmStatus)
rel#550:RelSubset#15.LOGICAL.any.None: 0.[NONE].[NONE], best=rel#549
rel#549:FlinkLogicalSink.LOGICAL.any.None: 0.[NONE].[NONE](input=RelSubset#548,table=anonymous_collect$2,fields=alarmKey, alarmStatus), rowcount=1.0E8, cumulative cost={2.0E8 rows, 2.0E8 cpu, 1.6E9 io, 0.0 network, 0.0 memory}
rel#551:RelSubset#15.STREAM_PHYSICAL.any.None: 0.[NONE].[NONE], best=null
rel#552:AbstractConverter.STREAM_PHYSICAL.any.None: 0.[NONE].[NONE](input=RelSubset#550,convention=STREAM_PHYSICAL,FlinkRelDistributionTraitDef=any,MiniBatchIntervalTraitDef=None: 0,ModifyKindSetTraitDef=[NONE],UpdateKindTraitDef=[NONE]), rowcount=1.0E8, cumulative cost={inf}
rel#554:StreamPhysicalSink.STREAM_PHYSICAL.any.None: 0.[NONE].[NONE](input=RelSubset#553,table=anonymous_collect$2,fields=alarmKey, alarmStatus), rowcount=1.0E8, cumulative cost={inf}

Graphviz:
digraph G {
root [style=filled,label="Root"];
subgraph cluster14{
label="Set 14 RecordType(VARCHAR(2147483647) alarmKey, INTEGER alarmStatus)";
rel540 [label="rel#540:FlinkLogicalTableSourceScan\ntable=[default_catalog, default_database, alarm_status_redis_dim],fields=alarmKey, alarmStatus\nrows=1.0E8, cost={1.0E8 rows, 1.0E8 cpu, 1.6E9 io, 0.0 network, 0.0 memory}",color=blue,shape=box]
subset548 [label="rel#548:RelSubset#14.LOGICAL.any.None: 0.[NONE].[NONE]"]
subset553 [label="rel#553:RelSubset#14.STREAM_PHYSICAL.any.None: 0.[NONE].[NONE]",color=red]
}
subgraph cluster15{
label="Set 15 RecordType(VARCHAR(2147483647) alarmKey, INTEGER alarmStatus)";
rel549 [label="rel#549:FlinkLogicalSink\ninput=RelSubset#548,table=anonymous_collect$2,fields=alarmKey, alarmStatus\nrows=1.0E8, cost={2.0E8 rows, 2.0E8 cpu, 1.6E9 io, 0.0 network, 0.0 memory}",color=blue,shape=box]
rel552 [label="rel#552:AbstractConverter\ninput=RelSubset#550,convention=STREAM_PHYSICAL,FlinkRelDistributionTraitDef=any,MiniBatchIntervalTraitDef=None: 0,ModifyKindSetTraitDef=[NONE],UpdateKindTraitDef=[NONE]\nrows=1.0E8, cost={inf}",shape=box]
rel554 [label="rel#554:StreamPhysicalSink\ninput=RelSubset#553,table=anonymous_collect$2,fields=alarmKey, alarmStatus\nrows=1.0E8, cost={inf}",shape=box]
subset550 [label="rel#550:RelSubset#15.LOGICAL.any.None: 0.[NONE].[NONE]"]
subset551 [label="rel#551:RelSubset#15.STREAM_PHYSICAL.any.None: 0.[NONE].[NONE]"]
}
root -> subset551;
subset548 -> rel540[color=blue];
subset550 -> rel549[color=blue]; rel549 -> subset548[color=blue];
subset551 -> rel552; rel552 -> subset550;
subset551 -> rel554; rel554 -> subset553;
}

Compatibility between saving from Flink and from Spark

It would be nice if saving from Flink worked like saving from Spark (redis).
Currently the Flink sink saves only two fields besides the key, while an entity may have many keys/fields; this could simply be done with:
hset(. .)
hset(. .)
......

Is writing an update stream supported?

Redis sink table definition:

  drop table if exists redis_sink;
  create table redis_sink ( 
    `additionalKey` STRING,
    `redisKey` STRING, 
    `redisValue` decimal(23, 8) ,
    
     PRIMARY KEY (`additionalKey`) NOT ENFORCED
  ) with ( 
    'connector' = 'redis', 
    'redis-mode' = 'cluster', 
    'cluster-nodes' = 'xxxx:6379', 
    'command' = 'HSET',
    'ttl'='1000'
    --'connector.property-version' = '1' 
  ); 
  insert into redis_sink 
  select
  'hellohash',
  create_time_day || userId as rediskey,
  
  sum(amount) as total_user_amount


    from (
        select 
        orderId,
        last_value(userId) as userId,
        last_value(amount) as amount,
        last_value(createTimestamp) as createTimestamp,
        REGEXP_EXTRACT(last_value(createTimestamp),'(.*?)[\sT](.+)',1) as `create_time_day`
        from test_topic_source
        where orderId is not null
        group by orderId
    ) group by create_time_day,userId

The error:
java.lang.IllegalStateException: please declare primary key for sink table when query contains update/delete record.
at org.apache.flink.util.Preconditions.checkState(Preconditions.java:193)
at org.apache.flink.streaming.connectors.redis.table.RedisDynamicTableSink.validatePrimaryKey(RedisDynamicTableSink.java:79)
at org.apache.flink.streaming.connectors.redis.table.RedisDynamicTableSink.getChangelogMode(RedisDynamicTableSink.java:53)
at org.apache.flink.table.planner.plan.optimize.program.FlinkChangelogModeInferenceProgram$SatisfyModifyKindSetTraitVisitor.visit(FlinkChangelogModeInferenceProgram.scala:125)
at org.apache.flink.table.planner.plan.optimize.program.FlinkChangelogModeInferenceProgram.optimize(FlinkChangelogModeInferenceProgram.scala:51)
at org.apache.flink.table.planner.plan.optimize.program.FlinkChangelogModeInferenceProgram.optimize(FlinkChangelogModeInferenceProgram.scala:40)
at org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1$$anonfun$apply$1.apply(FlinkGroupProgram.scala:63)
at org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1$$anonfun$apply$1.apply(FlinkGroupProgram.scala:60)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)

Redis sink: connector not found

1. Flink 1.14.3; the blink dependency was removed before packaging. (screenshots omitted)

2. Error log
org.apache.flink.table.api.ValidationException: Unable to create a sink for writing table 'default_catalog.default_database.sink_redis'.

Table options are:

'command'='hset'
'connector'='redis'
'host'='xxx'
'password'='******'
'port'='6379'
'redis-mode'='single'
at org.apache.flink.table.factories.FactoryUtil.createTableSink(FactoryUtil.java:184)
at org.apache.flink.table.planner.delegation.PlannerBase.getTableSink(PlannerBase.scala:388)
at org.apache.flink.table.planner.delegation.PlannerBase.translateToRel(PlannerBase.scala:222)
at org.apache.flink.table.planner.delegation.StreamPlanner$$anonfun$2.apply(StreamPlanner.scala:101)
at org.apache.flink.table.planner.delegation.StreamPlanner$$anonfun$2.apply(StreamPlanner.scala:83)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at org.apache.flink.table.planner.delegation.StreamPlanner.explain(StreamPlanner.scala:83)
at org.apache.flink.table.planner.delegation.StreamPlanner.explain(StreamPlanner.scala:47)
at com.dlink.executor.CustomTableEnvironmentImpl.explainSqlRecord(CustomTableEnvironmentImpl.java:281)
at com.dlink.executor.Executor.explainSqlRecord(Executor.java:249)
at com.dlink.explainer.Explainer.explainSql(Explainer.java:215)
at com.dlink.job.JobManager.explainSql(JobManager.java:474)
at com.dlink.service.impl.StudioServiceImpl.explainFlinkSql(StudioServiceImpl.java:167)
at com.dlink.service.impl.StudioServiceImpl.explainSql(StudioServiceImpl.java:154)
at com.dlink.controller.StudioController.explainSql(StudioController.java:51)
at sun.reflect.GeneratedMethodAccessor537.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:205)
at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:150)
at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:117)
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:895)
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:808)
at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87)
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1067)
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:963)
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1006)
at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:909)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:883)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:227)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at com.alibaba.druid.support.http.WebStatFilter.doFilter(WebStatFilter.java:124)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:197)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:97)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:540)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:135)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:357)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:382)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:895)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1732)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191)
at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.flink.table.api.ValidationException: Cannot discover a connector using option: 'connector'='redis'
at org.apache.flink.table.factories.FactoryUtil.enrichNoMatchingConnectorError(FactoryUtil.java:587)
at org.apache.flink.table.factories.FactoryUtil.getDynamicTableFactory(FactoryUtil.java:561)
at org.apache.flink.table.factories.FactoryUtil.createTableSink(FactoryUtil.java:180)
... 73 more
Caused by: org.apache.flink.table.api.ValidationException: Could not find any factory for identifier 'redis' that implements 'org.apache.flink.table.factories.DynamicTableFactory' in the classpath.

Available factory identifiers are:

blackhole
clickhouse
datagen
elasticsearch-6
filesystem
jdbc
kafka
mysql-cdc
print
upsert-kafka
at org.apache.flink.table.factories.FactoryUtil.discoverFactory(FactoryUtil.java:399)
at org.apache.flink.table.factories.FactoryUtil.enrichNoMatchingConnectorError(FactoryUtil.java:583)
... 75 more

3. The redis sink SQL (screenshot omitted)

Version 1.3.1 can write data but cannot read it back — how do I fix this?

Using the bundled example, the data is received:

-- create the table
create table sink_redis(uid VARCHAR,score double,score2 double )
with ( 'connector' = 'redis',
'host' = '10.11.69.176',
'port' = '6379',
'redis-mode' = 'single',
'password' = '****',
'command' = 'SET',
'value.data.structure' = 'row'); -- 'value.data.structure'='row': the whole row is stored in the value, delimited by '\01'
-- write test data; score and score2 are the two dimension fields to be looked up
insert into sink_redis select * from (values ('1', 10.3, 10.1));

-- in redis, the value is: "1\x0110.3\x0110.1" --
-- write complete --

-- create join table --
create table join_table with ('command'='get', 'value.data.structure'='row') like sink_redis

-- create result table --
create table result_table(uid VARCHAR, username VARCHAR, score double, score2 double) with ('connector'='print')

-- create source table --
create table source_table(uid VARCHAR, username VARCHAR, proc_time as procTime()) with ('connector'='datagen', 'fields.uid.kind'='sequence', 'fields.uid.start'='1', 'fields.uid.end'='2')

-- join the dimension table to fetch multiple fields --
insert
into
result_table
select
s.uid,
s.username,
j.score, -- from the dimension table
j.score2 -- from the dimension table
from
source_table as s
join join_table for system_time as of s.proc_time as j on
j.uid = s.uid

Use redis pipeline instead of JedisPool

Redis on Spark is much faster than on Flink (when it should be faster on Flink) because Spark uses pipelining.
If pipelining were used in Flink as well, it would be faster.

Could writing raw BYTES to redis be supported?

Please support storing raw BYTES as-is, without converting them to base64.
Here is my code.

    from pyflink.table import EnvironmentSettings, TableEnvironment, DataTypes
    from pyflink.table.udf import udf, udtf, TableFunction, ScalarFunction
    from pyflink.common import Row
    import pickle
    env_settings = EnvironmentSettings.in_streaming_mode()
    table_env = TableEnvironment.create(env_settings)
    statement_set = table_env.create_statement_set()
    configuration = table_env.get_config().get_configuration()
    configuration.set_string("pipeline.name", "test syncing data from a database to redis, with custom processing")
    configuration.set_string("execution.checkpointing.interval", "3s")

    class RedisForm(TableFunction):
        def __init__(self, key_name):
            self.key_name = key_name
        def eval(self, record: Row):
            redis_hash_value = pickle.dumps(record.as_dict())
            redis_hash_key = record[self.key_name]
            per_name = "manager-system:"
            redis_name = per_name + "qhdata_standard" + "." + "dim_addr_type" + ":" + "cur_code1"
            return redis_name, redis_hash_key, redis_hash_value

    table_env.execute_sql('''
            CREATE TABLE source_table (
            `rid` INT,
            update_time TIMESTAMP,
            create_time TIMESTAMP,
            cur_code STRING,
            cur_name STRING,
          PRIMARY KEY (`rid`)  NOT ENFORCED
        ) WITH (
            'connector' = 'mysql-cdc',
            'hostname'='192.168.1.1',
            'port' = '3308',
            'username'='root',
            'password'='123456',
            'database-name' = 'qhdata_standard',
            'table-name' = 'dim_addr_type',
            'server-time-zone' = 'Asia/Shanghai'
        );
    ''')

    table_env.execute_sql('''
            CREATE TABLE result_table (
            redis_name VARCHAR, 
            redis_key VARCHAR, 
            redis_value BYTES 
        ) WITH (
            'connector' = 'redis',
            'host'='192.168.1.1',
            'port' = '6379',
            'cluster-nodes' = '192.168.1.1:6379',
            'redis-mode' = 'cluster',
            'command'='hset'
        );
    ''')
    createForm = udtf(RedisForm("cur_code"), result_types=[DataTypes.STRING(), DataTypes.STRING(), DataTypes.BYTES()])
    source =table_env.from_path("source_table")
    result = source.flat_map(createForm)
    statement_set.add_insert("result_table", result)
    statement_set.execute()

The value computed in flink looks like x'80049596000000000000007d9...
but the connector converts it to base64. For raw BYTES data like this, could you provide an option to store it directly, without conversion?
Thanks a lot.
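For what it's worth, if the connector does Base64-encode BYTES values before storing them (as described above — the encode step here is an assumption about its behavior, not its actual code), the original bytes can be recovered losslessly on the reading side:

```java
import java.util.Arrays;
import java.util.Base64;

public class BytesRoundTrip {

    // Assumed write-side behavior: raw bytes are stored as Base64 text.
    static String store(byte[] raw) {
        return Base64.getEncoder().encodeToString(raw);
    }

    // Read side: decode the stored text back into the original bytes.
    static byte[] load(String stored) {
        return Base64.getDecoder().decode(stored);
    }

    public static void main(String[] args) {
        // A few bytes resembling the pickled payload above.
        byte[] pickled = {(byte) 0x80, 0x04, (byte) 0x95, 0x00, 0x7d};
        System.out.println(Arrays.equals(pickled, load(store(pickled)))); // true
    }
}
```

So readers outside Flink (e.g. the pickle.loads side) would need the matching Base64 decode before unpickling, as long as the connector converts.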

Is batch mode supported?

Hi, on Flink 1.14 the sink writes no data in batch execution mode. Does the plugin support batch mode?

Flink SQL join against a redis dimension table does not pick up updates to the redis data

When a Flink SQL table is joined against a redis dimension table and the data in redis is updated, the Flink computation does not seem to notice in real time. Do dimension tables support dynamic updates, and if so, how should they be used?
My current usage is: select ...... from biz_table as a inner join redis_table as b for system_time as of a.proc_time as b on a.pair=b.pair and b.name='HASH_NAME'
where biz_table is a temporary table
and redis_table is a redis Hash, queried with the hget command.
Versions: flink-connector-redis 1.2.1
Flink 1.14.5

Querying redis data directly fails with: Cannot generate a valid execution plan for the given query

All the examples join against a datagen source. Why can't the table be queried directly?

StringBuilder redisOutTable = new StringBuilder();
redisOutTable.append("create table sink_redis (id varchar, login_account varchar, nick_name varchar) ");
redisOutTable.append("with ( 'connector'='redis', 'host'='127.0.0.1','port'='6379'," +
        " 'redis-mode'='single','password'='','command'='hget'," +
        " 'maxIdle'='2', 'minIdle'='1', 'lookup.cache.max-rows'='10', 'lookup.cache.ttl'='10', 'lookup.max-retries'='3')");
envTable.executeSql(redisOutTable.toString());

envTable.executeSql("create table result_table(uid VARCHAR, login_account VARCHAR, nick_name VARCHAR) with ('connector'='print')");
envTable.executeSql("insert into result_table select id, login_account, nick_name from sink_redis;");

Exception: Exception in thread "main" org.apache.flink.table.api.TableException: Cannot generate a valid execution plan for the given query:

FlinkLogicalSink(table=[default_catalog.default_database.result_table], fields=[id, login_account, nick_name])
+- FlinkLogicalTableSourceScan(table=[[default_catalog, default_database, sink_redis]], fields=[id, login_account, nick_name])

This exception indicates that the query uses an unsupported SQL feature.
Please check the documentation for the set of currently supported SQL features.
at org.apache.flink.table.planner.plan.optimize.program.FlinkVolcanoProgram.optimize(FlinkVolcanoProgram.scala:70)
at org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.$anonfun$optimize$1(FlinkChainedProgram.scala:59)

Is redis sentinel mode not supported?

sql client error:
[ERROR] Could not execute SQL statement. Reason:
org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a suitable table factory for 'org.apache.flink.table.factories.TableSourceFactory' in
the classpath.

Exception report

Caused by: java.lang.VerifyError: Bad type on operand stack
Exception Details:
Location:
io/lettuce/core/resource/AddressResolverGroupProvider$DefaultDnsAddressResolverGroupWrapper.()V @51: putstatic
Reason:
Type 'io/netty/resolver/dns/DnsAddressResolverGroup' (current frame, stack[0]) is not assignable to 'io/netty/resolver/AddressResolverGroup'
Current Frame:
bci: @51
flags: { }
locals: { }
stack: { 'io/netty/resolver/dns/DnsAddressResolverGroup' }
Bytecode:
0x0000000: bb00 0259 bb00 0359 b700 04b8 0005 b600
0x0000010: 06b8 0007 1208 b600 09b6 000a bb00 0b59
0x0000020: b700 0cb6 000d bb00 0e59 b700 0fb6 0010
0x0000030: b700 11b3 0012 b1

at io.lettuce.core.resource.AddressResolverGroupProvider.<clinit>(AddressResolverGroupProvider.java:35)
at io.lettuce.core.resource.DefaultClientResources.<clinit>(DefaultClientResources.java:112)
at io.lettuce.core.AbstractRedisClient.<init>(AbstractRedisClient.java:122)
at io.lettuce.core.RedisClient.<init>(RedisClient.java:99)
at io.lettuce.core.RedisClient.create(RedisClient.java:136)
at org.apache.flink.streaming.connectors.redis.common.container.RedisCommandsContainerBuilder.build(RedisCommandsContainerBuilder.java:65)
at org.apache.flink.streaming.connectors.redis.common.container.RedisCommandsContainerBuilder.build(RedisCommandsContainerBuilder.java:34)
at org.apache.flink.streaming.connectors.redis.table.RedisSinkFunction.open(RedisSinkFunction.java:281)
at org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:34)
at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:100)
at org.apache.flink.table.runtime.operators.sink.SinkOperator.open(SinkOperator.java:58)
at org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:110)
at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:711)
at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:687)
at org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:654)
at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:958)
at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:927)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:766)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:575)
at java.lang.Thread.run(Thread.java:748)

Flink version 1.14.5; the connector is from the release at https://github.com/jeff-zou/flink-connector-redis/releases/tag/1.2.5

TTL keeps being reset under the hset command

In Flink SQL, using the hset command with the ttl option configured, every hset on the same key resets its TTL; in the extreme case the key may never expire. Is there an option to avoid this?
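One way a sink could avoid pushing the deadline back is to call EXPIRE only when the key has no expiry yet. Redis's TTL command returns -2 for a missing key, -1 for an existing key without an expiry, and the remaining seconds otherwise, so the guard reduces to a one-line check (a hypothetical helper sketching the idea, not a connector option):

```java
public class TtlGuard {

    // Redis TTL reply semantics: -2 = key missing, -1 = key exists without
    // an expiry, >= 0 = remaining seconds. (Re)set the expiry only when the
    // key exists but has none, so repeated HSETs don't keep resetting it.
    static boolean shouldSetExpire(long ttlReply) {
        return ttlReply == -1;
    }

    public static void main(String[] args) {
        // Fresh key right after its first HSET has no expiry yet.
        System.out.println(shouldSetExpire(-1));
        // A key already counting down should be left alone.
        System.out.println(shouldSetExpire(120));
    }
}
```

A sink would issue TTL key before EXPIRE key seconds and apply this guard between the two calls.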

Setting `command` option in TableDescriptor leads to Java error

sink = (
        TableDescriptor.for_connector("redis")
        .option("host", "redis")
        .option("port", "6379")
        .option("database", "1")
        .option("redis-mode", "single")
        .option("command", "set")
        .schema(
        Schema.new_builder()
            .column("k_", DataTypes.STRING())
            .column("v_", DataTypes.STRING())
            .build())
        .build()
    )
    statement_set.add_insert(sink, table)
    statement_set.attach_as_datastream()

When I try this, I receive the following error:

Caused by: java.lang.UnsupportedOperationException
        at java.base/java.util.Collections$UnmodifiableMap.put(Collections.java:1457)
        at org.apache.flink.streaming.connectors.redis.table.RedisDynamicTableFactory.createDynamicTableSink(RedisDynamicTableFactory.java:52)
        at org.apache.flink.table.factories.FactoryUtil.createDynamicTableSink(FactoryUtil.java:319)
        ... 28 more

It looks like an unmodifiable Map is being mutated. I traced the error back, and the following block appears to be the culprit:

if (context.getCatalogTable().getOptions().containsKey(REDIS_COMMAND)) {
    context.getCatalogTable()
            .getOptions()
            .put(
                    REDIS_COMMAND,
                    context.getCatalogTable()
                            .getOptions()
                            .get(REDIS_COMMAND)
                            .toUpperCase());
}

Please help me with this issue.
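The factory calls put() on the map returned by getOptions(), which the TableDescriptor path hands over wrapped in Collections.unmodifiableMap, hence the UnsupportedOperationException. A hypothetical fix sketch (not necessarily how the maintainers resolved it) is to copy the options into a mutable map before normalizing the command name:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Hypothetical fix sketch for the UnmodifiableMap error: copy the catalog
// table's options into a mutable map before normalizing the command name,
// instead of calling put() on the (possibly unmodifiable) original map.
public class OptionsCopyDemo {
    static Map<String, String> normalizeCommand(Map<String, String> options) {
        Map<String, String> mutable = new HashMap<>(options);
        if (mutable.containsKey("command")) {
            mutable.put("command", mutable.get("command").toUpperCase());
        }
        return mutable;
    }

    public static void main(String[] args) {
        // Stand-in for context.getCatalogTable().getOptions() on the
        // TableDescriptor path, which is an unmodifiable map.
        Map<String, String> raw = new HashMap<>();
        raw.put("command", "set");
        Map<String, String> options = Collections.unmodifiableMap(raw);
        System.out.println(normalizeCommand(options).get("command")); // prints SET
    }
}
```

The SQL DDL path apparently passes a mutable options map, which would explain why only the TableDescriptor route triggers the error.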

Test example fails after importing the dependency

I imported the dependency into IDEA as described in the documentation, but running the set task in the tests fails:


Caused by: java.lang.NoSuchMethodError: io.netty.util.CharsetUtil.encoder(Ljava/nio/charset/Charset;)Ljava/nio/charset/CharsetEncoder;

Adding netty-common (which contains netty-util) did not help. The Redis connection settings are fine. Please take a look, thanks.
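A NoSuchMethodError on io.netty.util.CharsetUtil usually points to an older Netty version on the classpath shadowing the one Lettuce needs, not a missing artifact. One likely remedy, in line with the README's own guidance for environments without matching Lettuce/netty-transport-native-epoll versions, is to use the jar-with-dependencies classifier so the connector ships its own Netty (version and coordinates taken from the README; adjust to your release):

```xml
<dependency>
    <groupId>io.github.jeff-zou</groupId>
    <artifactId>flink-connector-redis</artifactId>
    <version>1.4.2</version>
    <classifier>jar-with-dependencies</classifier>
</dependency>
```

Alternatively, running `mvn dependency:tree` and aligning all netty-* artifacts to a single version should resolve the clash.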

Does the Redis sink support batched and interval-based flushing?

Hello, the sample in your DataStreamTest shows the sink supporting batched commits and time-based flushes, but there seems to be no equivalent for the Table API?

            new RedisCacheOptions.Builder().setCacheMaxSize(100).setCacheTTL(10L).build();
