Approach and environment
- Flink 1.13.2
- Kafka 2.6.2
Read records from Kafka and, according to business logic, route each one to a different topic.
To do this, override the Flink Kafka key serializer and add your own logic so that each message is actively sent to the topic you choose.
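The FlinkJobBo event class is never shown in the original. Below is a minimal sketch consistent with the getters used later (getApiModel, getGroupId); everything beyond those two properties is an assumption:

// Hypothetical POJO: only getApiModel() and getGroupId() are actually
// referenced by this article's code; all other details are assumed
public class FlinkJobBo {
    private String apiModel; // "1" means: route to a per-group topic
    private String groupId;  // used to build the target topic name
    public String getApiModel() { return apiModel; }
    public void setApiModel(String apiModel) { this.apiModel = apiModel; }
    public String getGroupId() { return groupId; }
    public void setGroupId(String groupId) { this.groupId = groupId; }
}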
First, configure the Kafka connection properties:
Properties props = new Properties();
props.put("bootstrap.servers", "10.116.0.16:9092");
props.put("acks", "all");
props.put("retries", 1);
props.put("batch.size", 16384);
props.put("linger.ms", 1);
props.put("buffer.memory", 33554432);
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
Note that at this point both the key serializer and the value serializer are StringSerializer.
Flink Kafka connector
FlinkKafkaProducer<FlinkJobBo> fkProducer =
        new FlinkKafkaProducer<>("", new MyKeySerialization(), props, FlinkKafkaProducer.Semantic.EXACTLY_ONCE);
Here MyKeySerialization is the overridden key serializer.
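One caveat, not from the original article: with Semantic.EXACTLY_ONCE the connector uses Kafka transactions, and Flink's default transaction timeout (one hour) exceeds the broker's default transaction.max.timeout.ms (15 minutes), which typically fails the job at startup. A minimal sketch of the usual fix, assuming default broker settings:

// Keep the producer's transaction timeout within the broker's
// transaction.max.timeout.ms (15 minutes by default); otherwise
// EXACTLY_ONCE initialization is rejected by the broker
props.setProperty("transaction.timeout.ms", "900000"); // 15 minutes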
Custom serializer
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.flink.streaming.connectors.kafka.KafkaSerializationSchema;
import org.apache.kafka.clients.producer.ProducerRecord;
import javax.annotation.Nullable;
import java.nio.charset.StandardCharsets;

public class MyKeySerialization implements KafkaSerializationSchema<FlinkJobBo> {
    String topic;
    public MyKeySerialization(String topic) {
        this.topic = topic;
    }
    public MyKeySerialization() {
    }
    // Note: the record's key and value are both byte[], which is why a new
    // value serializer has to be configured (see below)
    @Override
    public ProducerRecord<byte[], byte[]> serialize(FlinkJobBo flinkJobBo, @Nullable Long timestamp) {
        // Better initialized once than on every record, but kept as in the original
        JsonUtils.setObjectMapper(new ObjectMapper());
        // Branch on your own business condition
        if ("1".equals(flinkJobBo.getApiModel())) {
            // Build the target topic name dynamically
            return new ProducerRecord<>("topic-" + flinkJobBo.getGroupId(),
                    JsonUtils.toJson(flinkJobBo).getBytes(StandardCharsets.UTF_8));
        }
        // Placeholder left in the original: replace "" with a real fallback topic,
        // otherwise these sends will fail with an invalid topic name
        return new ProducerRecord<>("", "".getBytes(StandardCharsets.UTF_8));
    }
}
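JsonUtils is the author's own helper and is never shown in the original. Below is a minimal sketch that matches the three calls used above (setObjectMapper, toJson, toBean); the implementation details are assumptions:

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.io.IOException;

// Hypothetical helper: only a guess that is consistent with the calls
// made by this article's code
public final class JsonUtils {
    private static ObjectMapper objectMapper = new ObjectMapper();

    public static void setObjectMapper(ObjectMapper mapper) {
        objectMapper = mapper;
    }

    // Serialize a bean to a JSON string
    public static String toJson(Object value) {
        try {
            return objectMapper.writeValueAsString(value);
        } catch (JsonProcessingException e) {
            throw new RuntimeException(e);
        }
    }

    // Deserialize a JSON string into a bean
    public static <T> T toBean(String json, Class<T> clazz) {
        try {
            return objectMapper.readValue(json, clazz);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}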
Because serialize() returns ProducerRecord<byte[], byte[]>, the producer's value serializer must handle raw byte arrays. So replace
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
with
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.ByteArraySerializer");
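Equivalently, referencing the serializer class instead of spelling out the string literal avoids typos; a small variation, not from the original:

import org.apache.kafka.common.serialization.ByteArraySerializer;

props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());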
Complete code
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import java.util.Properties;

// Create the Flink streaming execution environment
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(1);
// 1. Set the default (source) topic
String topic = "test";
// 2. Read the stream data from Kafka
Properties props = new Properties();
props.put("bootstrap.servers", "10.116.0.16:9092");
props.put("acks", "all");
props.put("retries", 1);
props.put("batch.size", 16384);
props.put("linger.ms", 1);
props.put("buffer.memory", 33554432);
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
// Consume records from Kafka
DataStreamSource<String> kafkaDataStream =
        env.addSource(new FlinkKafkaConsumer<>(topic, new SimpleStringSchema(), props));
// 3. Map the raw strings to FlinkJobBo objects (the main stream)
DataStream<FlinkJobBo> ds = kafkaDataStream
        .map((MapFunction<String, FlinkJobBo>) s -> JsonUtils.toBean(s, FlinkJobBo.class));
// Switch the value serializer to byte arrays for the producer
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.ByteArraySerializer");
// 4.1.1 Custom serializer assigns the target topic *****
FlinkKafkaProducer<FlinkJobBo> fkProducer =
        new FlinkKafkaProducer<>("", new MyKeySerialization(), props, FlinkKafkaProducer.Semantic.EXACTLY_ONCE);
fkProducer.setLogFailuresOnly(false);
ds.addSink(fkProducer);
env.execute();
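To see the routing in action, push a matching JSON record into the source topic. A hypothetical smoke test with a plain Kafka producer; the values are chosen to hit the apiModel == "1" branch, so the record should land in topic-g1:

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.util.Properties;

// Send one test record into the source topic "test"
Properties p = new Properties();
p.put("bootstrap.servers", "10.116.0.16:9092");
p.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
p.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
try (KafkaProducer<String, String> producer = new KafkaProducer<>(p)) {
    // apiModel = "1" and groupId = "g1" -> routed to "topic-g1" by the job
    producer.send(new ProducerRecord<>("test", "{\"apiModel\":\"1\",\"groupId\":\"g1\"}"));
}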
Summary
The above reflects my personal experience. I hope it gives you a useful reference, and I hope you will continue to support 代码网.