
Flink-CDC Full and Incremental Capture of SQL Server Data

2024-08-02

This article explains in detail how to use Flink-CDC to capture a SQL Server data source, both as a full snapshot and incrementally.

1. Installing SQL Server and enabling the transaction log

If you don't have a SQL Server environment but still want to work through this material, your only option is to install your own SQL Server with Docker for learning. If you already have an environment, just check that the SQL Server Agent service (sqlagent.enabled) is running and that CDC is enabled.
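If you are not sure whether CDC is already on for your database, a quick query against the system catalog tells you (a minimal check; substitute your own database name for test):

select name, is_cdc_enabled from sys.databases where name = 'test';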

1.1 Pull the image with docker

The flink-cdc project on GitHub currently lists SQL Server 2012, 2014, 2016, 2017 and 2019 as supported, but I wanted the very latest image (as it turns out, 2022-latest and latest are the same — their image IDs are identical — and later tests ran without problems), so I pulled the image with:

docker pull mcr.microsoft.com/mssql/server:latest
1.2 Run SQL Server and enable the agent

A standard startup, nothing special to note; the main thing is to set the password (the password policy is fairly strict, so an online random password generator is handy).

docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=${your_password}' \
   -p 1433:1433 --name sqlserver \
   -d mcr.microsoft.com/mssql/server:latest

Enable the agent with sqlagent.enabled. Once set, SQL Server must be restarted; since ours runs in Docker, docker restart sqlserver is enough.

[root@hdp-01 ~]# docker exec -it --user root sqlserver bash
root@0274812d0c10:/# /opt/mssql/bin/mssql-conf set sqlagent.enabled true
SQL Server needs to be restarted in order to apply this setting. Please run
'systemctl restart mssql-server.service'.
root@0274812d0c10:/# exit
exit
[root@hdp-01 ~]# docker restart sqlserver
sqlserver
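To confirm the agent really came up after the restart, you can query the server services DMV (a sketch, reusing the sa password from above):

/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P "${your_password}" \
    -Q "select servicename, status_desc from sys.dm_server_services;"

The SQL Server Agent row should show status_desc = Running.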
1.3 Enable CDC

Run the following commands step by step; when you see is_cdc_enabled = 1, CDC has been enabled on the current database.

root@0274812d0c10:/# /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P "${your_password}"
1> create database test;
2> go
1> use test;
2> go
Changed database context to 'test'.
1> exec sys.sp_cdc_enable_db;
2> go
1> select is_cdc_enabled from sys.databases where name = 'test';
2> go
is_cdc_enabled
--------------
             1

(1 rows affected)
1> create table t_info (id int,order_date date,purchaser int,quantity int,product_id int,primary key ([id]))
2> go
1> 
2> 
3> exec sys.sp_cdc_enable_table
4> @source_schema = 'dbo',
5> @source_name   = 't_info',
6> @role_name     = 'cdc_role';
7> go
Update mask evaluation will be disabled in net_changes_function because the CLR configuration option is disabled.
Job 'cdc.test_capture' started successfully.
Job 'cdc.test_cleanup' started successfully.
1> select * from t_info;
2> go
id          order_date       purchaser   quantity    product_id 
----------- ---------------- ----------- ----------- -----------

(0 rows affected)
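To generate some change events for the incremental phase later on, you can run a little DML against the table (an illustrative sketch; any values will do):

1> insert into t_info values (1, '2024-01-30', 1, 100, 1);
2> insert into t_info values (2, '2024-01-31', 2, 50, 3);
3> go
1> update t_info set quantity = 200 where id = 1;
2> go
1> delete from t_info where id = 2;
2> go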
1.4 Check that CDC is enabled correctly

Connect to SQL Server with a client and check whether tables with table_schema = cdc appear in information_schema.tables of the test database. If they do, SQL Server is installed and CDC is enabled successfully.

1> use test;
2> go
Changed database context to 'test'.
1> select * from information_schema.tables;
2> go
table_catalog   table_schema   table_name         table_type
test            dbo            t_info             BASE TABLE
test            dbo            systranschemas     BASE TABLE
test            cdc            change_tables      BASE TABLE
test            cdc            ddl_history        BASE TABLE
test            cdc            lsn_time_mapping   BASE TABLE
test            cdc            captured_columns   BASE TABLE
test            cdc            index_columns      BASE TABLE
test            cdc            dbo_t_info_CT      BASE TABLE
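With CDC active, the captured changes can also be inspected directly from the change table, or via the generated table-valued function (a sketch; dbo_t_info is the default capture instance name, <schema>_<table>):

1> select * from cdc.dbo_t_info_CT;
2> go
1> declare @from_lsn binary(10) = sys.fn_cdc_get_min_lsn('dbo_t_info');
2> declare @to_lsn binary(10) = sys.fn_cdc_get_max_lsn();
3> select * from cdc.fn_cdc_get_all_changes_dbo_t_info(@from_lsn, @to_lsn, 'all');
4> go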

2. Implementation

2.1 The Flink-CDC main program for capturing SQL Server

Add the dependency:

        <dependency>
            <groupId>com.ververica</groupId>
            <artifactId>flink-connector-sqlserver-cdc</artifactId>
            <version>3.0.0</version>
        </dependency>
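The connector is not the whole classpath: the program below also uses the ordinary Flink streaming APIs. As a sketch of what else the pom needs (the version is an assumption — match it to your Flink cluster; flink-cdc 3.0.x targets the Flink 1.x line):

        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-streaming-java</artifactId>
            <version>1.18.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-clients</artifactId>
            <version>1.18.0</version>
        </dependency>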

Write the main method:

    public static void main(String[] args) throws Exception {

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // set the global parallelism
        env.setParallelism(1);
        // processing-time semantics: turn off automatic watermark generation
        env.getConfig().setAutoWatermarkInterval(0);
        // start a checkpoint every 60s
        env.enableCheckpointing(60000, CheckpointingMode.EXACTLY_ONCE);
        // minimum pause between checkpoints
        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(1000);
        // checkpoint timeout
        env.getCheckpointConfig().setCheckpointTimeout(60000);
        // allow only one checkpoint at a time
        // env.getCheckpointConfig().setMaxConcurrentCheckpoints(1);
        // keep checkpoint data when the job is cancelled
        // env.getCheckpointConfig().setExternalizedCheckpointCleanup(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);


        SourceFunction<String> sqlServerSource = SqlServerSource.<String>builder()
                .hostname("localhost")
                .port(1433)
                .username("sa")
                .password("")
                .database("test")
                .tableList("dbo.t_info")
                .startupOptions(StartupOptions.initial())
                .debeziumProperties(getDebeziumProperties())
                .deserializer(new CustomerDeserializationSchemaSqlServer())
                .build();

        DataStreamSource<String> dataStreamSource = env.addSource(sqlServerSource, "_transaction_log_source");
        dataStreamSource.print().setParallelism(1);
        env.execute("sqlserver-cdc-test");

    }


    public static Properties getDebeziumProperties() {
        Properties properties = new Properties();
        properties.put("converters", "sqlServerDebeziumConverter");
        // "type" must point at the converter class from section 2.3 (fully qualified name if it lives in a package)
        properties.put("sqlServerDebeziumConverter.type", "SqlServerDebeziumConverter");
        properties.put("sqlServerDebeziumConverter.database.type", "sqlserver");
        // custom output patterns, optional
        properties.put("sqlServerDebeziumConverter.format.datetime", "yyyy-MM-dd HH:mm:ss");
        properties.put("sqlServerDebeziumConverter.format.date", "yyyy-MM-dd");
        properties.put("sqlServerDebeziumConverter.format.time", "HH:mm:ss");
        return properties;
    }
2.2 Custom SQL Server deserialization format

Under the hood Flink-CDC uses Debezium, which captures SQL Server data changes (CRUD) in the format below (these samples were captured from a zeus.dbo.orders table, but the shape is identical for any table):

# initial snapshot (op=r)
Struct{after=Struct{id=1,order_date=2024-01-30,purchaser=1,quantity=100,product_id=1},source=Struct{version=1.9.7.Final,connector=sqlserver,name=sqlserver_transaction_log_source,ts_ms=1706574924473,snapshot=true,db=zeus,schema=dbo,table=orders,commit_lsn=0000002b:00002280:0003},op=r,ts_ms=1706603724432}

# insert (op=c)
Struct{after=Struct{id=12,order_date=2024-01-11,purchaser=6,quantity=233,product_id=63},source=Struct{version=1.9.7.Final,connector=sqlserver,name=sqlserver_transaction_log_source,ts_ms=1706603786187,db=zeus,schema=dbo,table=orders,change_lsn=0000002b:00002480:0002,commit_lsn=0000002b:00002480:0003,event_serial_no=1},op=c,ts_ms=1706603788461}

# update (op=u)
Struct{before=Struct{id=12,order_date=2024-01-11,purchaser=6,quantity=233,product_id=63},after=Struct{id=12,order_date=2024-01-11,purchaser=8,quantity=233,product_id=63},source=Struct{version=1.9.7.Final,connector=sqlserver,name=sqlserver_transaction_log_source,ts_ms=1706603845603,db=zeus,schema=dbo,table=orders,change_lsn=0000002b:00002500:0002,commit_lsn=0000002b:00002500:0003,event_serial_no=2},op=u,ts_ms=1706603850134}

# delete (op=d)
Struct{before=Struct{id=11,order_date=2024-01-11,purchaser=6,quantity=233,product_id=63},source=Struct{version=1.9.7.Final,connector=sqlserver,name=sqlserver_transaction_log_source,ts_ms=1706603973023,db=zeus,schema=dbo,table=orders,change_lsn=0000002b:000025e8:0002,commit_lsn=0000002b:000025e8:0005,event_serial_no=1},op=d,ts_ms=1706603973859}

You can therefore write your own deserialization schema and emit every change in one standard shape. Below is the format I defined, for reference:

import com.alibaba.fastjson2.JSON;
import com.alibaba.fastjson2.JSONObject;
import com.alibaba.fastjson2.JSONWriter;
import com.ververica.cdc.debezium.DebeziumDeserializationSchema;
import io.debezium.data.Envelope;
import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.util.Collector;
import org.apache.kafka.connect.data.Field;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.Struct;
import org.apache.kafka.connect.source.SourceRecord;

import java.util.HashMap;
import java.util.Map;

public class CustomerDeserializationSchemaSqlServer implements DebeziumDeserializationSchema<String> {

    private static final long serialVersionUID = -1L;


    @Override
    public void deserialize(SourceRecord sourceRecord, Collector<String> collector) {
        Map<String, Object> resultMap = new HashMap<>();
        String topic = sourceRecord.topic();
        String[] split = topic.split("[.]");
        String database = split[1];
        String table = split[2];
        resultMap.put("db", database);
        resultMap.put("tableName", table);
        // get the operation type
        Envelope.Operation operation = Envelope.operationFor(sourceRecord);
        // get the payload itself
        Struct struct = (Struct) sourceRecord.value();
        Struct after = struct.getStruct("after");
        Struct before = struct.getStruct("before");
        String op = operation.name();
        resultMap.put("op", op);

        // insert, update, or snapshot read
        if (op.equals(Envelope.Operation.CREATE.name()) || op.equals(Envelope.Operation.READ.name()) || op.equals(Envelope.Operation.UPDATE.name())) {
            JSONObject afterJson = new JSONObject();
            if (after != null) {
                Schema schema = after.schema();
                for (Field field : schema.fields()) {
                    afterJson.put(field.name(), after.get(field.name()));
                }
                resultMap.put("after", afterJson);
            }
        }

        if (op.equals(Envelope.Operation.DELETE.name())) {
            JSONObject beforeJson = new JSONObject();
            if (before != null) {
                Schema schema = before.schema();
                for (Field field : schema.fields()) {
                    beforeJson.put(field.name(), before.get(field.name()));
                }
                resultMap.put("before", beforeJson);
            }
        }

        collector.collect(JSON.toJSONString(resultMap, JSONWriter.Feature.FieldBased, JSONWriter.Feature.LargeObject));

    }

    @Override
    public TypeInformation<String> getProducedType() {
        return BasicTypeInfo.STRING_TYPE_INFO;
    }

}
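For the t_info table from section 1.3, an insert would come out of this deserializer roughly as the JSON below (an illustrative example, assuming the date converter from the next section is active; not real captured output):

{"db":"test","tableName":"t_info","op":"CREATE","after":{"id":1,"order_date":"2024-01-30","purchaser":1,"quantity":100,"product_id":1}}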
2.3 Custom date format converter

Debezium encodes dates as 5-digit numbers and datetimes as 13-digit numbers, so we need to convert them back into standard date or time strings based on the SQL Server column type. The SQL Server date types are mainly these:

Column type      Snapshot type (JDBC type)             CDC type (JDBC type)
date             java.sql.Date (91)                    java.sql.Date (91)
time             java.sql.Timestamp (92)               java.sql.Time (92)
datetime         java.sql.Timestamp (93)               java.sql.Timestamp (93)
datetime2        java.sql.Timestamp (93)               java.sql.Timestamp (93)
datetimeoffset   microsoft.sql.DateTimeOffset (-155)   microsoft.sql.DateTimeOffset (-155)
smalldatetime    java.sql.Timestamp (93)               java.sql.Timestamp (93)

import io.debezium.spi.converter.CustomConverter;
import io.debezium.spi.converter.RelationalColumn;
import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.connect.data.SchemaBuilder;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;
import java.util.Properties;

@Slf4j
public class SqlServerDebeziumConverter implements CustomConverter<SchemaBuilder, RelationalColumn> {

    private static final String DATE_FORMAT = "yyyy-MM-dd";
    private static final String TIME_FORMAT = "HH:mm:ss";
    private static final String DATETIME_FORMAT = "yyyy-MM-dd HH:mm:ss";
    private DateTimeFormatter dateFormatter;
    private DateTimeFormatter timeFormatter;
    private DateTimeFormatter datetimeFormatter;
    private SchemaBuilder schemaBuilder;
    private String databaseType;
    private String schemaNamePrefix;


    @Override
    public void configure(Properties properties) {
        // required parameter: database.type; only sqlserver is supported
        this.databaseType = properties.getProperty("database.type");
        // throw if it is unset or set to anything other than sqlserver
        if (this.databaseType == null || !this.databaseType.equals("sqlserver")) {
            throw new IllegalArgumentException("database.type must be set to 'sqlserver'");
        }
        // optional parameters: format.date, format.time, format.datetime
        String dateFormat = properties.getProperty("format.date", DATE_FORMAT);
        String timeFormat = properties.getProperty("format.time", TIME_FORMAT);
        String datetimeFormat = properties.getProperty("format.datetime", DATETIME_FORMAT);
        // default schema name prefix: this class's name plus the database type
        String className = this.getClass().getName();
        // honour schema.name.prefix if it is set
        this.schemaNamePrefix = properties.getProperty("schema.name.prefix", className + "." + this.databaseType);
        // initialize the formatters
        dateFormatter = DateTimeFormatter.ofPattern(dateFormat);
        timeFormatter = DateTimeFormatter.ofPattern(timeFormat);
        datetimeFormatter = DateTimeFormatter.ofPattern(datetimeFormat);

    }

    // converter registration for SQL Server column types
    public void registerSqlServerConverter(String columnType, ConverterRegistration<SchemaBuilder> converterRegistration) {
        String schemaName = this.schemaNamePrefix + "." + columnType.toLowerCase();
        schemaBuilder = SchemaBuilder.string().name(schemaName);
        switch (columnType) {
            case "DATE":
                converterRegistration.register(schemaBuilder, value -> {
                    if (value == null) {
                        return null;
                    } else if (value instanceof java.sql.Date) {
                        return dateFormatter.format(((java.sql.Date) value).toLocalDate());
                    } else {
                        return this.failConvert(value, schemaName);
                    }
                });
                break;
            case "TIME":
                converterRegistration.register(schemaBuilder, value -> {
                    if (value == null) {
                        return null;
                    } else if (value instanceof java.sql.Time) {
                        return timeFormatter.format(((java.sql.Time) value).toLocalTime());
                    } else if (value instanceof java.sql.Timestamp) {
                        return timeFormatter.format(((java.sql.Timestamp) value).toLocalDateTime().toLocalTime());
                    } else {
                        return this.failConvert(value, schemaName);
                    }
                });
                break;
            case "DATETIME":
            case "DATETIME2":
            case "SMALLDATETIME":
            case "DATETIMEOFFSET":
                converterRegistration.register(schemaBuilder, value -> {
                    if (value == null) {
                        return null;
                    } else if (value instanceof java.sql.Timestamp) {
                        return datetimeFormatter.format(((java.sql.Timestamp) value).toLocalDateTime());
                    } else if (value instanceof microsoft.sql.DateTimeOffset) {
                        microsoft.sql.DateTimeOffset dateTimeOffset = (microsoft.sql.DateTimeOffset) value;
                        return datetimeFormatter.format(
                                dateTimeOffset.getOffsetDateTime().withOffsetSameInstant(ZoneOffset.UTC).toLocalDateTime());
                    } else {
                        return this.failConvert(value, schemaName);
                    }
                });
                break;
            default:
                schemaBuilder = null;
                break;
        }
    }


    @Override
    public void converterFor(RelationalColumn relationalColumn, ConverterRegistration<SchemaBuilder> converterRegistration) {
        // column type name, upper-cased to match the switch labels above
        String columnType = relationalColumn.typeName().toUpperCase();
        // dispatch to the converter for this database type
        if (this.databaseType.equals("sqlserver")) {
            this.registerSqlServerConverter(columnType, converterRegistration);
        } else {
            log.warn("unsupported database type: {}", this.databaseType);
            schemaBuilder = null;
        }
    }

    private String getClassName(Object value) {
        if (value == null) {
            return null;
        }
        return value.getClass().getName();
    }

    // log and fall back to toString() when a value cannot be converted
    private String failConvert(Object value, String type) {
        String valueClass = this.getClassName(value);
        String valueString = valueClass == null ? null : value.toString();
        log.warn("failed to convert value of class {} for schema {}, falling back to '{}'", valueClass, type, valueString);
        return valueString;
    }
}

3. Summary

Flink-CDC's support for this kind of incremental capture from traditional databases is already well packaged, and the official docs walk through it in detail. Even so, if you want to learn a technique thoroughly, I think it is worth doing the whole thing end to end yourself: it speeds up your own progress, and when problems appear you can reason about solutions from more angles. I hope this article is of some help.
