
How to Use Flink CDC to Synchronize Oracle Database Data


Preface

Flink CDC is a stream-based data integration tool that aims to give users a more comprehensive set of programming interfaces (APIs). It lets users set up database synchronization through YAML configuration files and also provides the Flink CDC source connector API. Flink CDC optimizes the job submission process and adds several advanced features, such as automatic schema evolution, data transformation, full-database synchronization, and exactly-once semantics.
This article uses flink-connector-oracle-cdc to synchronize data from an Oracle database.

1. Enable Archive Logging

1) In a terminal on the database server, connect to the database with the sysdba role:

sqlplus / as sysdba

or

sqlplus /nolog
connect sys/password as sysdba;

2) Check whether archive logging is enabled:

archive log list;

If the output shows "Database log mode: No Archive Mode", archive logging is not enabled.
If it shows "Database log mode: Archive Mode", archive logging is already enabled.
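
Alternatively, the current log mode can be queried directly from v$database (a quick sanity check run as sysdba):

select log_mode from v$database;  -- returns ARCHIVELOG or NOARCHIVELOG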
3) Enable archive logging:

alter system set db_recovery_file_dest_size = 10g;
alter system set db_recovery_file_dest = '/opt/oracle/oradata/recovery_area' scope=spfile;
shutdown immediate;
startup mount;
alter database archivelog;
alter database open;

Note:
Enabling archive logging requires a database restart.
Archived logs consume a large amount of disk space, so expired log files should be purged regularly.
4) After the database is back up, run archive log list; again to confirm that archiving is now enabled.

2. Create a Dedicated Flink CDC User
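
Both variants below grant a quota on a tablespace named logminer_tbs, which must exist before the user is created. A minimal sketch to create it first (the datafile path is illustrative; adjust it to your environment):

create tablespace logminer_tbs datafile '/opt/oracle/oradata/orcl/logminer_tbs.dbf' size 25m reuse autoextend on maxsize unlimited;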

2.1 For a non-CDB Oracle database, execute the following SQL:

  create user flinkuser identified by flinkpw default tablespace logminer_tbs quota unlimited on logminer_tbs;
  grant create session to flinkuser;
  grant set container to flinkuser;
  grant select on v_$database to flinkuser;
  grant flashback any table to flinkuser;
  grant select any table to flinkuser;
  grant select_catalog_role to flinkuser;
  grant execute_catalog_role to flinkuser;
  grant select any transaction to flinkuser;
  grant logmining to flinkuser;
  grant analyze any to flinkuser;
  grant create table to flinkuser;
  -- not required when scan.incremental.snapshot.enabled=true (the default)
  grant lock any table to flinkuser;
  grant alter any table to flinkuser;
  grant create sequence to flinkuser;
  grant execute on dbms_logmnr to flinkuser;
  grant execute on dbms_logmnr_d to flinkuser;
  grant select on v_$log to flinkuser;
  grant select on v_$log_history to flinkuser;
  grant select on v_$logmnr_logs to flinkuser;
  grant select on v_$logmnr_contents to flinkuser;
  grant select on v_$logmnr_parameters to flinkuser;
  grant select on v_$logfile to flinkuser;
  grant select on v_$archived_log to flinkuser;
  grant select on v_$archive_dest_status to flinkuser;

2.2 For an Oracle CDB database, execute the following SQL:

  create user flinkuser identified by flinkpw default tablespace logminer_tbs quota unlimited on logminer_tbs container=all;
  grant create session to flinkuser container=all;
  grant set container to flinkuser container=all;
  grant select on v_$database to flinkuser container=all;
  grant flashback any table to flinkuser container=all;
  grant select any table to flinkuser container=all;
  grant select_catalog_role to flinkuser container=all;
  grant execute_catalog_role to flinkuser container=all;
  grant select any transaction to flinkuser container=all;
  grant logmining to flinkuser container=all;
  grant create table to flinkuser container=all;
  -- not required when scan.incremental.snapshot.enabled=true (the default)
  grant lock any table to flinkuser container=all;
  grant create sequence to flinkuser container=all;
  grant execute on dbms_logmnr to flinkuser container=all;
  grant execute on dbms_logmnr_d to flinkuser container=all;
  grant select on v_$log to flinkuser container=all;
  grant select on v_$log_history to flinkuser container=all;
  grant select on v_$logmnr_logs to flinkuser container=all;
  grant select on v_$logmnr_contents to flinkuser container=all;
  grant select on v_$logmnr_parameters to flinkuser container=all;
  grant select on v_$logfile to flinkuser container=all;
  grant select on v_$archived_log to flinkuser container=all;
  grant select on v_$archive_dest_status to flinkuser container=all;

3. Enable Supplemental Logging at the Table or Database Level

-- enable supplemental logging for a specific table:
alter table databasename.tablename add supplemental log data (all) columns;
-- enable it for all tables in the database:
alter database add supplemental log data (all) columns;
-- enable minimal supplemental logging for the whole database:
alter database add supplemental log data;
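
To verify, the supplemental-logging status can be read back from v$database (both columns exist in modern Oracle releases):

select supplemental_log_data_min, supplemental_log_data_all from v$database;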

4. Synchronizing Data with flink-connector-oracle-cdc

4.1 Add the Maven dependency

 <dependency>
     <groupId>com.ververica</groupId>
     <artifactId>flink-connector-oracle-cdc</artifactId>
     <version>2.4.0</version>
 </dependency>
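
The CDC connector alone is not enough to compile and run the example below; the Flink streaming runtime must also be on the classpath. A sketch of the extra dependencies, assuming a Flink 1.17.x build (match the version to your cluster):

 <dependency>
     <groupId>org.apache.flink</groupId>
     <artifactId>flink-streaming-java</artifactId>
     <version>1.17.1</version>
 </dependency>
 <dependency>
     <groupId>org.apache.flink</groupId>
     <artifactId>flink-clients</artifactId>
     <version>1.17.1</version>
 </dependency>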

4.2 Main Java class

Note that the example connects as c##flink_user, a common user in a CDB; if you created the flinkuser account from section 2 instead, substitute it here.

package test.datastream.cdc.oracle;

import com.ververica.cdc.connectors.oracle.OracleSource;
import com.ververica.cdc.debezium.JsonDebeziumDeserializationSchema;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.source.SourceFunction;
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.types.Row;
import test.datastream.cdc.oracle.function.CacheDataAllWindowFunction;
import test.datastream.cdc.oracle.function.CdcString2RowMap;
import test.datastream.cdc.oracle.function.DbCdcSinkFunction;

import java.util.Properties;

public class OracleCdcExample {
    public static void main(String[] args) throws Exception {
        Properties properties = new Properties();
        // deliver numeric values as strings to avoid precision loss
        properties.setProperty("decimal.handling.mode", "string");
        SourceFunction<String> sourceFunction = OracleSource.<String>builder()
//                .startupOptions(StartupOptions.latest()) // start from the latest offset
                .url("jdbc:oracle:thin:@localhost:1521:orcl")
                .port(1521)
                .database("orcl") // monitor the orcl database
                .schemaList("c##flink_user") // monitor the c##flink_user schema
                .tableList("c##flink_user.test2") // monitor the test2 table
                .username("c##flink_user")
                .password("flinkpw")
                .debeziumProperties(properties)
                .deserializer(new JsonDebeziumDeserializationSchema()) // converts SourceRecord to JSON string
                .build();
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // use parallelism 1 for the source to keep message ordering
        DataStreamSource<String> source = env.addSource(sourceFunction).setParallelism(1);
        SingleOutputStreamOperator<Row> mapStream = source.flatMap(new CdcString2RowMap());
        SingleOutputStreamOperator<Row[]> winStream = mapStream.windowAll(TumblingProcessingTimeWindows.of(Time.seconds(5)))
                .process(new CacheDataAllWindowFunction());
        // write each window batch to the target database
        winStream.addSink(new DbCdcSinkFunction(null));
        env.execute();
    }
}
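
The article does not show CacheDataAllWindowFunction or DbCdcSinkFunction. A minimal sketch of what the window function might look like, assuming it simply buffers each 5-second window into an array for the batch sink (the class name matches the code above, but the body is a hypothetical reconstruction):

package test.datastream.cdc.oracle.function;

import org.apache.flink.streaming.api.functions.windowing.ProcessAllWindowFunction;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.types.Row;
import org.apache.flink.util.Collector;

import java.util.ArrayList;
import java.util.List;

/**
 * Collects all rows of each processing-time window into one Row[]
 * so the downstream sink can write them as a single batch.
 */
public class CacheDataAllWindowFunction extends ProcessAllWindowFunction<Row, Row[], TimeWindow> {
    @Override
    public void process(Context context, Iterable<Row> elements, Collector<Row[]> out) {
        List<Row> buffer = new ArrayList<>();
        for (Row row : elements) {
            buffer.add(row);
        }
        if (!buffer.isEmpty()) {
            out.collect(buffer.toArray(new Row[0]));
        }
    }
}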

4.3 Converting the JSON change record to a Row

package test.datastream.cdc.oracle.function;

import cn.com.victorysoft.common.configuration.VsConfiguration;
import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.types.Row;
import org.apache.flink.types.RowKind;
import org.apache.flink.util.Collector;
import test.datastream.cdc.CdcConstants;

import java.sql.Timestamp;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

/**
 * Parses the CDC JSON change record and converts it to a Flink Row.
 */
public class CdcString2RowMap extends RichFlatMapFunction<String, Row> {
    private final Map<String, Integer> columnMap = new HashMap<>();

    @Override
    public void open(Configuration parameters) throws Exception {
        // fixed column-to-position mapping for the test2 table
        columnMap.put("id", 0);
        columnMap.put("name", 1);
        columnMap.put("description", 2);
        columnMap.put("age", 3);
        columnMap.put("create_time", 4);
        columnMap.put("score", 5);
        columnMap.put("c_1", 6);
        columnMap.put("b_1", 7);
    }

    @Override
    public void flatMap(String s, Collector<Row> collector) throws Exception {
        System.out.println("receive: " + s);
        VsConfiguration conf = VsConfiguration.from(s);
        String op = conf.getString(CdcConstants.K_OP);
        VsConfiguration before = conf.getConfiguration(CdcConstants.K_BEFORE);
        VsConfiguration after = conf.getConfiguration(CdcConstants.K_AFTER);
        Row row;
        if (CdcConstants.OP_C.equals(op)) {
            // insert: use the after image
            row = convertToRow(after);
            row.setKind(RowKind.INSERT);
        } else if (CdcConstants.OP_U.equals(op)) {
            // update: use the after image
            row = convertToRow(after);
            row.setKind(RowKind.UPDATE_AFTER);
        } else if (CdcConstants.OP_D.equals(op)) {
            // delete: use the before image
            row = convertToRow(before);
            row.setKind(RowKind.DELETE);
        } else {
            // snapshot read ("r"): use the after image
            row = convertToRow(after);
            row.setKind(RowKind.INSERT);
        }
        collector.collect(row);
    }

    private Row convertToRow(VsConfiguration data) {
        Set<String> keys = data.getKeys();
        Row row = new Row(8);
        for (String key : keys) {
            Integer index = this.columnMap.get(key);
            if (index == null) {
                continue; // skip columns we do not map
            }
            Object value = data.get(key);
            if (key.equals("create_time")) {
                // convert the epoch long emitted by the connector to a Timestamp
                value = long2Timestamp((Long) value);
            }
            row.setField(index, value);
        }
        return row;
    }

    private static Timestamp long2Timestamp(long time) {
        // the connector emits microseconds here; Timestamp expects milliseconds
        return new Timestamp(time / 1000);
    }
}
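
CdcConstants is also not shown in the article; its values can be inferred from the standard Debezium change-event envelope. A hypothetical reconstruction:

package test.datastream.cdc;

/**
 * Keys and operation codes of the Debezium JSON change envelope
 * (hypothetical reconstruction of the CdcConstants class used above).
 */
public final class CdcConstants {
    public static final String K_OP = "op";         // operation field
    public static final String K_BEFORE = "before"; // row image before the change
    public static final String K_AFTER = "after";   // row image after the change
    public static final String OP_C = "c"; // insert
    public static final String OP_U = "u"; // update
    public static final String OP_D = "d"; // delete ("r" = snapshot read)

    private CdcConstants() {}
}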

This concludes the article on using Flink CDC to synchronize data from an Oracle database.
