
DataX Makes It Easy to Migrate SQL Server Data to GreatSQL

2024-08-06 | MySQL
GreatSQL is a domestically developed open-source database for financial-grade applications. It offers high performance, high reliability, high usability, and high security; it can serve as an optional replacement for MySQL or Percona Server in production environments, and it is completely free and compatible with both. This article walks through a full SQL Server-to-GreatSQL migration with DataX, then performs an incremental migration by adding a filter condition that excludes the rows already covered by the full pass.

1. Environment

1.1 Source: SQL Server

Version                      IP               Port
Microsoft SQL Server 2017    192.168.140.160  1433

1.2 Target: GreatSQL

Version          IP              Port
GreatSQL-8.0.32  192.168.139.86  3308

2. Installing the Environment

2.1 Install the SQL Server Environment

Note: the database is started from a Docker image.

2.1.1 Install Docker

1. Install base packages

$ yum install -y wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel wget vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack yum-utils

2. Configure a docker-ce yum repo (Aliyun mirror)

$ yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

3. Install Docker dependencies

$ yum install -y  device-mapper-persistent-data lvm2 

4. Install docker-ce

$ yum install docker-ce -y 

5. Start and enable the Docker service

$ systemctl start docker && systemctl enable docker

2.1.2 Pull the image

$ docker pull mcr.microsoft.com/mssql/server:2017-latest

2.1.3 Run the container
$ docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=********" \
-p 1433:1433 --name sqlserver2017 \
-d mcr.microsoft.com/mssql/server:2017-latest 

Parameters:

  • -e "ACCEPT_EULA=Y": accept the end-user license agreement

  • -e "SA_PASSWORD=****": set the sa password; it must not be too short or too simple, otherwise it fails SQL Server's password policy and the container stops

  • -p 1433:1433: map a host port to the container port (host port first)

  • --name sqlserver2017: container name

  • -d: run in the background

  • mcr.microsoft.com/mssql/server:2017-latest: image name and tag
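The sa password rule above can be pre-checked before starting the container. A minimal sketch, assuming SQL Server's default complexity policy (at least 8 characters and at least 3 of the 4 classes: upper, lower, digit, symbol); verify the exact policy on your own server:

```shell
# Sketch: pre-check a candidate sa password against an assumed default
# SQL Server complexity policy (>= 8 chars, 3 of 4 character classes).
check_sa_password() {
  pw="$1"
  classes=0
  [ "${#pw}" -ge 8 ] || { echo "too short"; return 1; }
  case "$pw" in *[A-Z]*) classes=$((classes+1)) ;; esac
  case "$pw" in *[a-z]*) classes=$((classes+1)) ;; esac
  case "$pw" in *[0-9]*) classes=$((classes+1)) ;; esac
  case "$pw" in *[!A-Za-z0-9]*) classes=$((classes+1)) ;; esac
  if [ "$classes" -ge 3 ]; then echo "ok"; else echo "too simple"; return 1; fi
}

check_sa_password 'Passw0rd!'         # → ok
check_sa_password 'abcd1234' || true  # → too simple
```

If the container still exits at startup, its logs explain why (for example, `docker logs sqlserver2017`).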

2.1.4 Use the database

1. Enter the container

$ docker exec -it sqlserver2017 bash

2. Connect to the database

$ /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P "********"

3. List the databases

1> select name from sys.databases;
2> go

4. Create a database

1> create database testdb;
2> go

5. Create a table and insert data

use testdb
create table t1(id int)
go
insert into t1 values(1),(2)
go

2.2 Install the GreatSQL Environment

GreatSQL is also installed from a Docker image; just pull it:

$ docker pull greatsql/greatsql

Then create the GreatSQL container:

$ docker run -d --name greatsql --hostname=greatsql -e MYSQL_ALLOW_EMPTY_PASSWORD=1 greatsql/greatsql

2.3 Install DataX

DataX requires:

  • JDK (1.8 or later; 1.8 recommended)

  • Python (2.6.x or later recommended)
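It is worth confirming the JDK prerequisite before unpacking. A small sketch that parses the version line; the sample string below is hypothetical, and on a real host you would feed in `java -version 2>&1 | head -1` (the parse targets the 1.x version scheme DataX recommends):

```shell
# Sketch: confirm the JDK is 1.8 before running DataX. The version line is a
# hypothetical sample; on a real host use: java -version 2>&1 | head -1
java_line='openjdk version "1.8.0_292"'

# Extract the leading "1.x" from the quoted version string (1.x scheme only)
major=$(printf '%s\n' "$java_line" | sed 's/.*"\(1\.[0-9]*\)\..*/\1/')

if [ "$major" = "1.8" ]; then echo "jdk ok"; else echo "jdk not 1.8"; fi   # → jdk ok
```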

Installation is unpack-and-run: just extract the tarball. However, running a job immediately after extraction, with no further steps, raises an error:

$ cd /soft

$ ll
total 3764708
-rw-r--r--  1 root root  853734462 dec  9 04:06 datax.tar.gz

$ tar xf datax.tar.gz

$ python /soft/datax/bin/datax.py /soft/datax/job/job.json 

datax (datax-opensource-3.0), from alibaba !
copyright (c) 2010-2017, alibaba group. all rights reserved.
2023-07-19 11:19:17.483 [main] warn  configparser - 插件[streamreader,streamwriter]加载失败,1s后重试... exception:code:[common-00], describe:[您提供的配置文件存在错误信息,请检查您的作业配置 .] - 配置信息错误,您提供的配置文件[/soft/datax/plugin/reader/._mysqlreader/plugin.json]不存在. 请检查您的配置文件. 
2023-07-19 11:19:18.488 [main] error engine - 
经datax智能分析,该任务最可能的错误原因是:
com.alibaba.datax.common.exception.dataxexception: code:[common-00], describe:[您提供的配置文件存在错误信息,请检查您的作业配置 .] - 配置信息错误,您提供的配置文件[/soft/datax/plugin/reader/._mysqlreader/plugin.json]不存在. 请检查您的配置文件.
at com.alibaba.datax.common.exception.dataxexception.asdataxexception(dataxexception.java:26)
at com.alibaba.datax.common.util.configuration.from(configuration.java:95)
at com.alibaba.datax.core.util.configparser.parseonepluginconfig(configparser.java:153)
at com.alibaba.datax.core.util.configparser.parsepluginconfig(configparser.java:125)
at com.alibaba.datax.core.util.configparser.parse(configparser.java:63)
at com.alibaba.datax.core.engine.entry(engine.java:137)
at com.alibaba.datax.core.engine.main(engine.java:204)

Fix: delete every file whose name begins with ._ under the plugin directory and under plugin/reader and plugin/writer. (These ._ files are macOS AppleDouble metadata left behind in the tarball.)

Hidden files must be removed from three directories:

  • plugin/
  • plugin/reader/
  • plugin/writer/
$ rm -rf /opt/app/datax/plugin/._*
$ rm -rf /opt/app/datax/plugin/reader/._*
$ rm -rf /opt/app/datax/plugin/writer/._*
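Instead of three separate rm commands, a single find can sweep the whole plugin tree. The sketch below demonstrates on a scratch directory so it can be tried safely first; in practice, point it at your install path (the article uses both /opt/app/datax and /soft/datax):

```shell
# Sketch: remove all AppleDouble ("._*") files under the plugin tree in one
# pass. Demonstrated against a throwaway directory that mimics the layout.
tmp=$(mktemp -d)
mkdir -p "$tmp/plugin/reader" "$tmp/plugin/writer"
touch "$tmp/plugin/._junk" "$tmp/plugin/reader/._mysqlreader" "$tmp/plugin/writer/plugin.json"

# The actual cleanup command; for real use:
#   find /soft/datax/plugin -name '._*' -type f -delete
find "$tmp/plugin" -name '._*' -type f -delete

remaining=$(find "$tmp/plugin" -name '._*' | wc -l)     # 0: AppleDouble files gone
kept=$(find "$tmp/plugin" -name 'plugin.json' | wc -l)  # 1: real files untouched
rm -rf "$tmp"
```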

Run the bundled sample job to confirm DataX is installed correctly:

$ python /soft/datax/bin/datax.py  /soft/datax/job/job.json 

datax (datax-opensource-3.0), from alibaba !
copyright (c) 2010-2017, alibaba group. all rights reserved.
 2023-07-19 11:22:12.298 [main] info  vminfo - vminfo# operatingsystem class => sun.management.operatingsystemimpl
2023-07-19 11:22:12.305 [main] info  engine - the machine info  => 
osinfo: oracle corporation 1.8 25.251-b08
jvminfo: linux amd64 4.19.25-200.1.el7.bclinux.x86_64
cpu num: 48
totalphysicalmemory: -0.00g
freephysicalmemory: -0.00g
maxfiledescriptorcount: -1
currentopenfiledescriptorcount: -1
gc names [ps marksweep, ps scavenge]

memory_name                    | allocation_size                | init_size                      
ps eden space                  | 256.00mb                       | 256.00mb                       
code cache                     | 240.00mb                       | 2.44mb                         
compressed class space         | 1,024.00mb                     | 0.00mb                         
ps survivor space              | 42.50mb                        | 42.50mb                        
ps old gen                     | 683.00mb                       | 683.00mb                       
metaspace                      | -0.00mb                        | 0.00mb  

2023-07-19 11:22:12.320 [main] info  engine - 
{"content":[{"reader":{"name":"streamreader",
"parameter":{"column":[
{"type":"string","value":"datax"},
{"type":"long","value":19890604},
{"type":"date","value":"1989-06-04 00:00:00"},
{"type":"bool","value":true},
{"type":"bytes","value":"test"}
],"slicerecordcount":100000}
},"writer":{"name":"streamwriter","parameter":{"encoding":"utf-8","print":false}}}],
"setting":{"errorlimit":{"percentage":0.02,"record":0},
"speed":{"byte":10485760}}}

2023-07-19 11:22:12.336 [main] warn  engine - prioriy set to 0, because numberformatexception, the value is: null
2023-07-19 11:22:12.337 [main] info  perftrace - perftrace traceid=job_-1, isenable=false, priority=0
2023-07-19 11:22:12.338 [main] info  jobcontainer - datax jobcontainer starts job.
2023-07-19 11:22:12.339 [main] info  jobcontainer - set jobid = 0
2023-07-19 11:22:12.352 [job-0] info  jobcontainer - jobcontainer starts to do prepare ...
2023-07-19 11:22:12.352 [job-0] info  jobcontainer - datax reader.job [streamreader] do prepare work .
2023-07-19 11:22:12.352 [job-0] info  jobcontainer - datax writer.job [streamwriter] do prepare work .
2023-07-19 11:22:12.352 [job-0] info  jobcontainer - jobcontainer starts to do split ...
2023-07-19 11:22:12.353 [job-0] info  jobcontainer - job set max-byte-speed to 10485760 bytes.
2023-07-19 11:22:12.354 [job-0] info  jobcontainer - datax reader.job [streamreader] splits to [1] tasks.
2023-07-19 11:22:12.354 [job-0] info  jobcontainer - datax writer.job [streamwriter] splits to [1] tasks.
2023-07-19 11:22:12.371 [job-0] info  jobcontainer - jobcontainer starts to do schedule ...
2023-07-19 11:22:12.375 [job-0] info  jobcontainer - scheduler starts [1] taskgroups.
2023-07-19 11:22:12.376 [job-0] info  jobcontainer - running by standalone mode.
2023-07-19 11:22:12.384 [taskgroup-0] info  taskgroupcontainer - taskgroupid=[0] start [1] channels for [1] tasks.
2023-07-19 11:22:12.388 [taskgroup-0] info  channel - channel set byte_speed_limit to -1, no bps activated.
2023-07-19 11:22:12.388 [taskgroup-0] info  channel - channel set record_speed_limit to -1, no tps activated.
2023-07-19 11:22:12.396 [taskgroup-0] info  taskgroupcontainer - taskgroup[0] taskid[0] attemptcount[1] is started
2023-07-19 11:22:12.697 [taskgroup-0] info  taskgroupcontainer - taskgroup[0] taskid[0] is successed, used[302]ms
2023-07-19 11:22:12.698 [taskgroup-0] info  taskgroupcontainer - taskgroup[0] completed it's tasks.
2023-07-19 11:22:22.402 [job-0] info  standalonejobcontainercommunicator - total 100000 records, 2600000 bytes | speed 253.91kb/s, 10000 records/s | error 0 records, 0 bytes |  all task waitwritertime 0.020s |  all task waitreadertime 0.033s | percentage 100.00%
2023-07-19 11:22:22.402 [job-0] info  abstractscheduler - scheduler accomplished all tasks.
2023-07-19 11:22:22.402 [job-0] info  jobcontainer - datax writer.job [streamwriter] do post work.
2023-07-19 11:22:22.403 [job-0] info  jobcontainer - datax reader.job [streamreader] do post work.
2023-07-19 11:22:22.403 [job-0] info  jobcontainer - datax jobid [0] completed successfully.
2023-07-19 11:22:22.403 [job-0] info  hookinvoker - no hook invoked, because base dir not exists or is a file: /soft/datax/hook
2023-07-19 11:22:22.404 [job-0] info  jobcontainer - 
 [total cpu info] => 
averagecpu                     | maxdeltacpu                    | mindeltacpu                    
-1.00%                         | -1.00%                         | -1.00%
 [total gc info] => 
 name                 | totalgccount       | maxdeltagccount    | mindeltagccount    | totalgctime        | maxdeltagctime     | mindeltagctime     
 ps marksweep         | 0                  | 0                  | 0                  | 0.000s             | 0.000s             | 0.000s             
 ps scavenge          | 0                  | 0                  | 0                  | 0.000s             | 0.000s             | 0.000s             

2023-07-19 11:22:22.404 [job-0] info  jobcontainer - perftrace not enable!
2023-07-19 11:22:22.404 [job-0] info  standalonejobcontainercommunicator - total 100000 records, 2600000 bytes | speed 253.91kb/s, 10000 records/s | error 0 records, 0 bytes |  all task waitwritertime 0.020s |  all task waitreadertime 0.033s | percentage 100.00%
2023-07-19 11:22:22.406 [job-0] info  jobcontainer - 
任务启动时刻                    : 2023-07-19 11:22:12
任务结束时刻                    : 2023-07-19 11:22:22
任务总计耗时                    :                 10s
任务平均流量                    :          253.91kb/s
记录写入速度                    :          10000rec/s
读出记录总数                    :              100000
读写失败总数                    :                   0

3. SQL Server to GreatSQL Full Migration

3.1 Create test data on the source (SQL Server)

$ docker exec -it 47bd0ed79c26 /bin/bash

$ /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P "********"

1> create database testdb
1> use testdb
1> insert into t1 values(1),(2),(3);
2> go
1> select * from t1;
2> go
id
-----------
          1
          2
          3

3.2 Create the table structure on the target (GreatSQL)

greatsql> create database testdb;
greatsql> use testdb;
greatsql> create table t1 (id int primary key);

3.3 Write the DataX job file

$ cat /soft/datax/job/sqlserver_to_greatsql.json
{
    "job": {
        "content": [
            {
                "reader": {
                    "name": "sqlserverreader",
                    "parameter": {
                        "connection": [
                            {
                                "jdbcUrl": ["jdbc:sqlserver://127.0.0.1:1433;databaseName=testdb"],
                                "table": ["t1"]
                            }
                        ],
                        "password": "********",
                        "username": "sa",
                        "column": ["*"]
                    }
                },
                "writer": {
                    "name": "mysqlwriter",
                    "parameter": {
                        "column": ["*"],
                        "connection": [
                            {
                                "jdbcUrl": "jdbc:mysql://10.17.139.86:3308/testdb",
                                "table": ["t1"]
                            }
                        ],
                        "password": "******",
                        "session": [],
                        "username": "admin",
                        "writeMode": "insert"
                    }
                }
            }
        ],
        "setting": {
            "speed": {
                "channel": "5"
            }
        }
    }
}
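Before handing a job file to DataX, it can be syntax-checked as JSON, which catches a stray comma or quote faster than a stack trace from the engine. A sketch (the skeleton written here is a hypothetical stand-in; in practice point the same command at /soft/datax/job/sqlserver_to_greatsql.json):

```shell
# Sketch: fail fast on malformed job JSON before invoking DataX.
# A minimal hypothetical skeleton is written to a temp file for demonstration.
job=$(mktemp)
cat > "$job" <<'EOF'
{
  "job": {
    "content": [
      {"reader": {"name": "sqlserverreader"},
       "writer": {"name": "mysqlwriter"}}
    ],
    "setting": {"speed": {"channel": "5"}}
  }
}
EOF

# json.tool exits non-zero on a parse error (use python or python3, whichever exists)
ok=$( { python -m json.tool "$job" || python3 -m json.tool "$job"; } >/dev/null 2>&1 && echo valid )
echo "$ok"   # → valid
rm -f "$job"
```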

3.4 Run the DataX migration job

$ python /soft/datax/bin/datax.py /soft/datax/job/sqlserver_to_greatsql.json 

datax (datax-opensource-3.0), from alibaba !
copyright (c) 2010-2017, alibaba group. all rights reserved.

2023-11-28 09:58:44.087 [main] info  vminfo - vminfo# operatingsystem class => sun.management.operatingsystemimpl
2023-11-28 09:58:44.104 [main] info  engine - the machine info  => 
osinfo: oracle corporation 1.8 25.181-b13
jvminfo: linux amd64 3.10.0-957.el7.x86_64

cpu num: 8
totalphysicalmemory: -0.00g
freephysicalmemory: -0.00g
maxfiledescriptorcount: -1
currentopenfiledescriptorcount: -1
gc names [ps marksweep, ps scavenge]
memory_name                    | allocation_size                | init_size                      
ps eden space                  | 256.00mb                       | 256.00mb                       
code cache                     | 240.00mb                       | 2.44mb                         
compressed class space         | 1,024.00mb                     | 0.00mb                         
ps survivor space              | 42.50mb                        | 42.50mb                        
ps old gen                     | 683.00mb                       | 683.00mb                       
metaspace                      | -0.00mb                        | 0.00mb                         

2023-11-28 09:58:44.137 [main] info  engine - 
{
"content":[
{"reader":{
"name":"sqlserverreader",
"parameter":{
"column":["*"],
"connection":[
{"jdbcurl":["jdbc:sqlserver://127.0.0.1:1433;databasename=testdb"],
"table":["t1"]}],
"password":"*************",
"username":"sa"}},

"writer":{"name":"mysqlwriter","parameter":{"column":["*"],

"connection":[{"jdbcurl":"jdbc:mysql://10.17.139.86:3308/testdb",
"table":["t1"]}],
"password":"********",
"session":[],
"username":"admin",
"writemode":"insert"}}}],
"setting":{"speed":{"channel":"5"}}}

2023-11-28 09:58:44.176 [main] warn  engine - prioriy set to 0, because numberformatexception, the value is: null
2023-11-28 09:58:44.179 [main] info  perftrace - perftrace traceid=job_-1, isenable=false, priority=0
2023-11-28 09:58:44.180 [main] info  jobcontainer - datax jobcontainer starts job.
2023-11-28 09:58:44.183 [main] info  jobcontainer - set jobid = 0
2023-11-28 09:58:44.542 [job-0] info  originalconfpretreatmentutil - available jdbcurl:jdbc:sqlserver://127.0.0.1:1433;databasename=testdb.
2023-11-28 09:58:44.544 [job-0] warn  originalconfpretreatmentutil - 您的配置文件中的列配置存在一定的风险. 因为您未配置读取数据库表的列,当您的表字段个数、类型有变动时,可能影响任务正确性甚至会运行出错。请检查您的配置并作出修改.
loading class `com.mysql.jdbc.driver'. this is deprecated. the new driver class is `com.mysql.cj.jdbc.driver'. the driver is automatically registered via the spi and manual loading of the driver class is generally unnecessary.
2023-11-28 09:58:45.099 [job-0] info  originalconfpretreatmentutil - table:[t1] all columns:[id].
2023-11-28 09:58:45.099 [job-0] warn  originalconfpretreatmentutil - 您的配置文件中的列配置信息存在风险. 因为您配置的写入数据库表的列为*,当您的表字段个数、类型有变动时,可能影响任务正确性甚至会运行出错。请检查您的配置并作出修改.
2023-11-28 09:58:45.102 [job-0] info  originalconfpretreatmentutil - write data [
insert into %s (id) values(?)
], which jdbcurl like:[jdbc:mysql://10..17.139.86:16310/testdb?yearisdatetype=false&zerodatetimebehavior=converttonull&tinyint1isbit=false&rewritebatchedstatements=true]
2023-11-28 09:58:45.103 [job-0] info  jobcontainer - jobcontainer starts to do prepare ...
2023-11-28 09:58:45.103 [job-0] info  jobcontainer - datax reader.job [sqlserverreader] do prepare work 
2023-11-28 09:58:45.104 [job-0] info  jobcontainer - datax writer.job [mysqlwriter] do prepare work .
2023-11-28 09:58:45.104 [job-0] info  jobcontainer - jobcontainer starts to do split ...
2023-11-28 09:58:45.105 [job-0] info  jobcontainer - job set channel-number to 5 channels.
2023-11-28 09:58:45.112 [job-0] info  jobcontainer - datax reader.job [sqlserverreader] splits to [1] tasks.
2023-11-28 09:58:45.114 [job-0] info  jobcontainer - datax writer.job [mysqlwriter] splits to [1] tasks.
2023-11-28 09:58:45.135 [job-0] info  jobcontainer - jobcontainer starts to do schedule ...
2023-11-28 09:58:45.139 [job-0] info  jobcontainer - scheduler starts [1] taskgroups.
2023-11-28 09:58:45.142 [job-0] info  jobcontainer - running by standalone mode.
2023-11-28 09:58:45.151 [taskgroup-0] info  taskgroupcontainer - taskgroupid=[0] start [1] channels for [1] tasks.
2023-11-28 09:58:45.157 [taskgroup-0] info  channel - channel set byte_speed_limit to -1, no bps activated.
2023-11-28 09:58:45.158 [taskgroup-0] info  channel - channel set record_speed_limit to -1, no tps activated.
2023-11-28 09:58:45.173 [taskgroup-0] info  taskgroupcontainer - taskgroup[0] taskid[0] attemptcount[1] is started
2023-11-28 09:58:45.181 [0-0-0-reader] info  commonrdbmsreader$task - begin to read record by sql: [select * from t1 
] jdbcurl:[jdbc:sqlserver://127.0.0.1:1433;databasename=testdb].
2023-11-28 09:58:45.398 [0-0-0-reader] info  commonrdbmsreader$task - finished read record by sql: [select * from t1 
] jdbcurl:[jdbc:sqlserver://127.0.0.1:1433;databasename=testdb].
2023-11-28 09:58:45.454 [taskgroup-0] info  taskgroupcontainer - taskgroup[0] taskid[0] is successed, used[284]ms
2023-11-28 09:58:45.455 [taskgroup-0] info  taskgroupcontainer - taskgroup[0] completed it's tasks.
2023-11-28 09:58:55.175 [job-0] info  standalonejobcontainercommunicator - total 3 records, 3 bytes | speed 0b/s, 0 records/s | error 0 records, 0 bytes |  all task waitwritertime 0.000s |  all task waitreadertime 0.000s | percentage 100.00%
2023-11-28 09:58:55.175 [job-0] info  abstractscheduler - scheduler accomplished all tasks.
2023-11-28 09:58:55.175 [job-0] info  jobcontainer - datax writer.job [mysqlwriter] do post work.
2023-11-28 09:58:55.176 [job-0] info  jobcontainer - datax reader.job [sqlserverreader] do post work.
2023-11-28 09:58:55.176 [job-0] info  jobcontainer - datax jobid [0] completed successfully.
2023-11-28 09:58:55.176 [job-0] info  hookinvoker - no hook invoked, because base dir not exists or is a file: /soft/datax/hook
2023-11-28 09:58:55.177 [job-0] info  jobcontainer - 
 [total cpu info] => 
averagecpu                     | maxdeltacpu                    | mindeltacpu                    
-1.00%                         | -1.00%                         | -1.00%
 [total gc info] => 
 name                 | totalgccount       | maxdeltagccount    | mindeltagccount    | totalgctime        | maxdeltagctime     | mindeltagctime     
 ps marksweep         | 1                  | 1                  | 1                  | 0.061s             | 0.061s             | 0.061s             
 ps scavenge          | 1                  | 1                  | 1                  | 0.039s             | 0.039s             | 0.039s             
2023-11-28 09:58:55.177 [job-0] info  jobcontainer - perftrace not enable!
2023-11-28 09:58:55.177 [job-0] info  standalonejobcontainercommunicator - total 3 records, 3 bytes | speed 0b/s, 0 records/s | error 0 records, 0 bytes |  all task waitwritertime 0.000s |  all task waitreadertime 0.000s | percentage 100.00%
2023-11-28 09:58:55.179 [job-0] info  jobcontainer - 
任务启动时刻                    : 2023-11-28 09:58:44
任务结束时刻                    : 2023-11-28 09:58:55
任务总计耗时                    :                 10s
任务平均流量                    :                0b/s
记录写入速度                    :              0rec/s
读出记录总数                    :                   3
读写失败总数                    :                   0

3.5 Verify the data on the target

greatsql> select * from t1;
+----+
| id |
+----+
|  1 |
|  2 |
|  3 |
+----+
3 rows in set (0.01 sec)

4. SQL Server to GreatSQL Incremental Migration

4.1 Create test data on the source (SQL Server)

2> create table t2 (id int,createtime datetime);
3> go
1> insert into t2 values(1,getdate());
2> go
(1 rows affected)
1> insert into t2 values(2,getdate());
2> go
(1 rows affected)
1> insert into t2 values(3,getdate());
2> go
(1 rows affected)
1> insert into t2 values(4,getdate());
2> go
(1 rows affected)
1> insert into t2 values(5,getdate());
2> go
(1 rows affected)
1> insert into t2 values(6,getdate());
2> go
(1 rows affected)
1> select * from t2;
2> go
id          createtime
----------- -----------------------
          1 2023-11-28 02:18:20.790
          2 2023-11-28 02:18:27.040
          3 2023-11-28 02:18:32.103
          4 2023-11-28 02:18:37.690
          5 2023-11-28 02:18:41.450
          6 2023-11-28 02:18:46.330

4.2 Write the DataX full-migration job file

$ cat sqlserver_to_greatsql_inc.json 
{
    "job": {
        "content": [
            {
                "reader": {
                    "name": "sqlserverreader",
                    "parameter": {
                        "connection": [
                            {
                                "jdbcUrl": ["jdbc:sqlserver://127.0.0.1:1433;databaseName=testdb"],
                                "table": ["t2"]
                            }
                        ],
                        "password": "********",
                        "username": "sa",
                        "column": ["*"]
                    }
                },
                "writer": {
                    "name": "mysqlwriter",
                    "parameter": {
                        "column": ["*"],
                        "connection": [
                            {
                                "jdbcUrl": "jdbc:mysql://10.17.139.86:3308/testdb",
                                "table": ["t2"]
                            }
                        ],
                        "password": "!qaz2wsx",
                        "session": [],
                        "username": "admin",
                        "writeMode": "insert"
                    }
                }
            }
        ],
        "setting": {
            "speed": {
                "channel": "5"
            }
        }
    }
}

4.3 Run the DataX full migration job

$ python /soft/datax/bin/datax.py /soft/datax/job/sqlserver_to_greatsql_inc.json 

 datax (datax-opensource-3.0), from alibaba !
copyright (c) 2010-2017, alibaba group. all rights reserved.
2023-11-28 10:19:59.279 [main] info  vminfo - vminfo# operatingsystem class => sun.management.operatingsystemimpl
2023-11-28 10:19:59.286 [main] info  engine - the machine info  => 
osinfo: oracle corporation 1.8 25.181-b13
jvminfo: linux amd64 3.10.0-957.el7.x86_64
cpu num: 8
totalphysicalmemory: -0.00g
freephysicalmemory: -0.00g
maxfiledescriptorcount: -1
currentopenfiledescriptorcount: -1
gc names [ps marksweep, ps scavenge]

memory_name                    | allocation_size                | init_size                      
ps eden space                  | 256.00mb                       | 256.00mb                       
code cache                     | 240.00mb                       | 2.44mb                         
compressed class space         | 1,024.00mb                     | 0.00mb                         
ps survivor space              | 42.50mb                        | 42.50mb                        
ps old gen                     | 683.00mb                       | 683.00mb                       
metaspace                      | -0.00mb                        | 0.00mb                         

2023-11-28 10:19:59.302 [main] info  engine - 
{"content":[{"reader":{"name":"sqlserverreader","parameter":{"column":[
"*"],"connection":[{"jdbcurl":["jdbc:sqlserver://127.0.0.1:1433;databasename=testdb"],
"table":["t2"]}],"password":"*************","username":"sa"}},
"writer":{"name":"mysqlwriter","parameter":{"column":["*"],
"connection":[{"jdbcurl":"jdbc:mysql://10..17.139.86:16310/testdb","table":["t2"]}],
"password":"********",
"session":[],
"username":"admin",
"writemode":"insert"}}}],
"setting":{"speed":{"channel":"5"}}}

2023-11-28 10:19:59.319 [main] warn  engine - prioriy set to 0, because numberformatexception, the value is: null
2023-11-28 10:19:59.321 [main] info  perftrace - perftrace traceid=job_-1, isenable=false, priority=0
2023-11-28 10:19:59.321 [main] info  jobcontainer - datax jobcontainer starts job.
2023-11-28 10:19:59.324 [main] info  jobcontainer - set jobid = 0
2023-11-28 10:19:59.629 [job-0] info  originalconfpretreatmentutil - available jdbcurl:jdbc:sqlserver://127.0.0.1:1433;databasename=testdb.
2023-11-28 10:19:59.630 [job-0] warn  originalconfpretreatmentutil - 您的配置文件中的列配置存在一定的风险. 因为您未配置读取数据库表的列,当您的表字段个数、类型有变动时,可能影响任务正确性甚至会运行出错。请检查您的配置并作出修改.
loading class `com.mysql.jdbc.driver'. this is deprecated. the new driver class is `com.mysql.cj.jdbc.driver'. the driver is automatically registered via the spi and manual loading of the driver class is generally unnecessary.
2023-11-28 10:20:00.027 [job-0] info  originalconfpretreatmentutil - table:[t2] all columns:[
id,createtime].
2023-11-28 10:20:00.027 [job-0] warn  originalconfpretreatmentutil - 您的配置文件中的列配置信息存在风险. 因为您配置的写入数据库表的列为*,当您的表字段个数、类型有变动时,可能影响任务正确性甚至会运行出错。请检查您的配置并作出修改.
2023-11-28 10:20:00.029 [job-0] info  originalconfpretreatmentutil - write data [
insert into %s (id,createtime) values(?,?)
], which jdbcurl like:[jdbc:mysql://10..17.139.86:16310/testdb?yearisdatetype=false&zerodatetimebehavior=converttonull&tinyint1isbit=false&rewritebatchedstatements=true]
2023-11-28 10:20:00.030 [job-0] info  jobcontainer - jobcontainer starts to do prepare ...
2023-11-28 10:20:00.031 [job-0] info  jobcontainer - datax reader.job [sqlserverreader] do prepare work .
2023-11-28 10:20:00.031 [job-0] info  jobcontainer - datax writer.job [mysqlwriter] do prepare work .
2023-11-28 10:20:00.032 [job-0] info  jobcontainer - jobcontainer starts to do split ...
2023-11-28 10:20:00.032 [job-0] info  jobcontainer - job set channel-number to 5 channels.
2023-11-28 10:20:00.037 [job-0] info  jobcontainer - datax reader.job [sqlserverreader] splits to [1] tasks.
2023-11-28 10:20:00.038 [job-0] info  jobcontainer - datax writer.job [mysqlwriter] splits to [1] tasks.
2023-11-28 10:20:00.060 [job-0] info  jobcontainer - jobcontainer starts to do schedule ...
2023-11-28 10:20:00.063 [job-0] info  jobcontainer - scheduler starts [1] taskgroups.
2023-11-28 10:20:00.066 [job-0] info  jobcontainer - running by standalone mode.
2023-11-28 10:20:00.073 [taskgroup-0] info  taskgroupcontainer - taskgroupid=[0] start [1] channels for [1] tasks.
2023-11-28 10:20:00.080 [taskgroup-0] info  channel - channel set byte_speed_limit to -1, no bps activated.
2023-11-28 10:20:00.080 [taskgroup-0] info  channel - channel set record_speed_limit to -1, no tps activated.
2023-11-28 10:20:00.093 [taskgroup-0] info  taskgroupcontainer - taskgroup[0] taskid[0] attemptcount[1] is started
2023-11-28 10:20:00.101 [0-0-0-reader] info  commonrdbmsreader$task - begin to read record by sql: [select * from t2 
] jdbcurl:[jdbc:sqlserver://127.0.0.1:1433;databasename=testdb].
2023-11-28 10:20:00.262 [0-0-0-reader] info  commonrdbmsreader$task - finished read record by sql: [select * from t2 
] jdbcurl:[jdbc:sqlserver://127.0.0.1:1433;databasename=testdb].
2023-11-28 10:20:00.334 [taskgroup-0] info  taskgroupcontainer - taskgroup[0] taskid[0] is successed, used[243]ms
2023-11-28 10:20:00.335 [taskgroup-0] info  taskgroupcontainer - taskgroup[0] completed it's tasks.
2023-11-28 10:20:10.087 [job-0] info  standalonejobcontainercommunicator - total 6 records, 54 bytes | speed 5b/s, 0 records/s | error 0 records, 0 bytes |  all task waitwritertime 0.000s |  all task waitreadertime 0.000s | percentage 100.00%
2023-11-28 10:20:10.088 [job-0] info  abstractscheduler - scheduler accomplished all tasks.
2023-11-28 10:20:10.088 [job-0] info  jobcontainer - datax writer.job [mysqlwriter] do post work.
2023-11-28 10:20:10.089 [job-0] info  jobcontainer - datax reader.job [sqlserverreader] do post work.
2023-11-28 10:20:10.090 [job-0] info  jobcontainer - datax jobid [0] completed successfully.
2023-11-28 10:20:10.091 [job-0] info  hookinvoker - no hook invoked, because base dir not exists or is a file: /soft/datax/hook
2023-11-28 10:20:10.094 [job-0] info  jobcontainer - 
 [total cpu info] => 
averagecpu                     | maxdeltacpu                    | mindeltacpu                    
-1.00%                         | -1.00%                         | -1.00%                      
 [total gc info] => 
 name                 | totalgccount       | maxdeltagccount    | mindeltagccount    | totalgctime        | maxdeltagctime     | mindeltagctime     
 ps marksweep         | 1                  | 1                  | 1                  | 0.034s             | 0.034s             | 0.034s             
 ps scavenge          | 1                  | 1                  | 1                  | 0.031s             | 0.031s             | 0.031s             

2023-11-28 10:20:10.094 [job-0] info  jobcontainer - perftrace not enable!
2023-11-28 10:20:10.095 [job-0] info  standalonejobcontainercommunicator - total 6 records, 54 bytes | speed 5b/s, 0 records/s | error 0 records, 0 bytes |  all task waitwritertime 0.000s |  all task waitreadertime 0.000s | percentage 100.00%
2023-11-28 10:20:10.097 [job-0] info  jobcontainer - 
任务启动时刻                    : 2023-11-28 10:19:59
任务结束时刻                    : 2023-11-28 10:20:10
任务总计耗时                    :                 10s
任务平均流量                    :                5b/s
记录写入速度                    :              0rec/s
读出记录总数                    :                   6
读写失败总数                    :                   0

4.4 Verify the fully migrated data

greatsql> select * from t2;
+----+---------------------+
| id | createtime          |
+----+---------------------+
|  1 | 2023-11-28 02:18:21 |
|  2 | 2023-11-28 02:18:27 |
|  3 | 2023-11-28 02:18:32 |
|  4 | 2023-11-28 02:18:38 |
|  5 | 2023-11-28 02:18:41 |
|  6 | 2023-11-28 02:18:46 |
+----+---------------------+ 

4.5 Insert incremental data on the source (SQL Server)

2> insert into t2 values(7,'2023-11-28 03:18:46.330');
3> go
changed database context to 'jem_db'.
(1 rows affected)
1> insert into t2 values(8,'2023-11-28 03:20:46.330');
2> go
(1 rows affected)
1> insert into t2 values(9,'2023-11-28 03:25:46.330');
2> go
(1 rows affected)
1> insert into t2 values(10,'2023-11-28 03:30:46.330');
2> go
(1 rows affected)
1> select * from t2;
2> go
id          createtime
----------- -----------------------
          1 2023-11-28 02:18:20.790
          2 2023-11-28 02:18:27.040
          3 2023-11-28 02:18:32.103
          4 2023-11-28 02:18:37.690
          5 2023-11-28 02:18:41.450
          6 2023-11-28 02:18:46.330
          7 2023-11-28 03:18:46.330
          8 2023-11-28 03:20:46.330
          9 2023-11-28 03:25:46.330
         10 2023-11-28 03:30:46.330

4.6 Write the DataX incremental-migration job file

$ cat sqlserver_to_greatsql_inc.json 
{
    "job": {
        "content": [
            {
                "reader": {
                    "name": "sqlserverreader",
                    "parameter": {
                        "connection": [
                            {
                                "jdbcUrl": ["jdbc:sqlserver://127.0.0.1:1433;databaseName=testdb"],
                                "table": ["t2"]
                            }
                        ],
                        "password": "********",
                        "username": "sa",
                        "column": ["*"],
                        "where": "createtime > '${start_time}' and createtime < '${end_time}'"
                    }
                },
                "writer": {
                    "name": "mysqlwriter",
                    "parameter": {
                        "column": ["*"],
                        "connection": [
                            {
                                "jdbcUrl": "jdbc:mysql://10.17.139.86:16310/testdb",
                                "table": ["t2"]
                            }
                        ],
                        "password": "!qaz2wsx",
                        "session": [],
                        "username": "admin",
                        "writeMode": "insert"
                    }
                }
            }
        ],
        "setting": {
            "speed": {
                "channel": "5"
            }
        }
    }
}
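The ${start_time} and ${end_time} placeholders in the where clause are supplied at run time through DataX's -p option as -D definitions. A small sketch of assembling that argument (the window values are the ones used in this walk-through; a real scheduler would persist each run's end_time and reuse it as the next run's start_time):

```shell
# Sketch: build the -p argument that fills ${start_time}/${end_time} in the
# job's where clause. The window values match this walk-through.
start_time='2023-11-28 03:17:46.330'
end_time='2023-11-28 03:31:46.330'
params="-Dstart_time='$start_time' -Dend_time='$end_time'"

echo "$params"
# then run:
#   python /soft/datax/bin/datax.py /soft/datax/job/sqlserver_to_greatsql_inc.json -p "$params"
```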

4.7 Run the DataX incremental migration job

$ python /soft/datax/bin/datax.py /soft/datax/job/sqlserver_to_greatsql_inc.json -p "-Dstart_time='2023-11-28 03:17:46.330' -Dend_time='2023-11-28 03:31:46.330'"

DataX (DATAX-OPENSOURCE-3.0), From Alibaba !
Copyright (C) 2010-2017, Alibaba Group. All Rights Reserved.

2023-11-28 10:29:24.492 [main] INFO  VMInfo - VMInfo# operatingSystem class => sun.management.OperatingSystemImpl

2023-11-28 10:29:24.504 [main] INFO  Engine - the machine info  => 

osInfo: Oracle Corporation 1.8 25.181-b13
jvmInfo: Linux amd64 3.10.0-957.el7.x86_64
cpu num: 8

totalPhysicalMemory: -0.00G
freePhysicalMemory: -0.00G
maxFileDescriptorCount: -1
currentOpenFileDescriptorCount: -1
GC Names [PS MarkSweep, PS Scavenge]
MEMORY_NAME                    | allocation_size                | init_size                      
PS Eden Space                  | 256.00MB                       | 256.00MB                       
Code Cache                     | 240.00MB                       | 2.44MB                         
Compressed Class Space         | 1,024.00MB                     | 0.00MB                         
PS Survivor Space              | 42.50MB                        | 42.50MB                        
PS Old Gen                     | 683.00MB                       | 683.00MB                       
Metaspace                      | -0.00MB                        | 0.00MB                         

2023-11-28 10:29:24.524 [main] INFO  Engine - 
{"content":[{"reader":{"name":"sqlserverreader","parameter":{"column":["*"],
"connection":[{"jdbcUrl":["jdbc:sqlserver://127.0.0.1:1433;databaseName=testdb"],
"table":["t2"]}],"password":"*************","username":"sa",
"where":"createtime > '2023-11-28 03:17:46.330' and createtime < '2023-11-28 03:31:46.330'"}},
"writer":{"name":"mysqlwriter","parameter":{"column":["*"],"connection":[{"jdbcUrl":"jdbc:mysql://10..17.139.86:16310/testdb","table":["t2"]}],
"password":"********",
"session":[],
"username":"admin",
"writeMode":"insert"}}}],
"setting":{"speed":{"channel":"5"}}}

2023-11-28 10:29:24.542 [main] WARN  Engine - prioriy set to 0, because NumberFormatException, the value is: null
2023-11-28 10:29:24.544 [main] INFO  PerfTrace - PerfTrace traceId=job_-1, isEnable=false, priority=0
2023-11-28 10:29:24.544 [main] INFO  JobContainer - DataX jobContainer starts job.
2023-11-28 10:29:24.546 [main] INFO  JobContainer - Set jobId = 0
2023-11-28 10:29:24.830 [job-0] INFO  OriginalConfPretreatmentUtil - Available jdbcUrl:jdbc:sqlserver://127.0.0.1:1433;databaseName=testdb.
2023-11-28 10:29:24.831 [job-0] WARN  OriginalConfPretreatmentUtil - 您的配置文件中的列配置存在一定的风险. 因为您未配置读取数据库表的列,当您的表字段个数、类型有变动时,可能影响任务正确性甚至会运行出错。请检查您的配置并作出修改.
Loading class `com.mysql.jdbc.Driver'. This is deprecated. The new driver class is `com.mysql.cj.jdbc.Driver'. The driver is automatically registered via the SPI and manual loading of the driver class is generally unnecessary.
2023-11-28 10:29:25.113 [job-0] INFO  OriginalConfPretreatmentUtil - table:[t2] all columns:[id,createtime].
2023-11-28 10:29:25.113 [job-0] WARN  OriginalConfPretreatmentUtil - 您的配置文件中的列配置信息存在风险. 因为您配置的写入数据库表的列为*,当您的表字段个数、类型有变动时,可能影响任务正确性甚至会运行出错。请检查您的配置并作出修改.
2023-11-28 10:29:25.115 [job-0] INFO  OriginalConfPretreatmentUtil - Write data [
insert INTO %s (id,createtime) VALUES(?,?)
], which jdbcUrl like:[jdbc:mysql://10..17.139.86:16310/testdb?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true]
2023-11-28 10:29:25.116 [job-0] INFO  JobContainer - jobContainer starts to do prepare ...
2023-11-28 10:29:25.117 [job-0] INFO  JobContainer - DataX Reader.Job [sqlserverreader] do prepare work .
2023-11-28 10:29:25.117 [job-0] INFO  JobContainer - DataX Writer.Job [mysqlwriter] do prepare work .
2023-11-28 10:29:25.118 [job-0] INFO  JobContainer - jobContainer starts to do split ...
2023-11-28 10:29:25.118 [job-0] INFO  JobContainer - Job set Channel-Number to 5 channels.
2023-11-28 10:29:25.123 [job-0] INFO  JobContainer - DataX Reader.Job [sqlserverreader] splits to [1] tasks.
2023-11-28 10:29:25.124 [job-0] INFO  JobContainer - DataX Writer.Job [mysqlwriter] splits to [1] tasks.
2023-11-28 10:29:25.146 [job-0] INFO  JobContainer - jobContainer starts to do schedule ...
2023-11-28 10:29:25.150 [job-0] INFO  JobContainer - Scheduler starts [1] taskGroups.
2023-11-28 10:29:25.153 [job-0] INFO  JobContainer - Running by standalone Mode.
2023-11-28 10:29:25.159 [taskGroup-0] INFO  TaskGroupContainer - taskGroupId=[0] start [1] channels for [1] tasks.
2023-11-28 10:29:25.165 [taskGroup-0] INFO  Channel - Channel set byte_speed_limit to -1, No bps activated.
2023-11-28 10:29:25.165 [taskGroup-0] INFO  Channel - Channel set record_speed_limit to -1, No tps activated.
2023-11-28 10:29:25.176 [taskGroup-0] INFO  TaskGroupContainer - taskGroup[0] taskId[0] attemptCount[1] is started
2023-11-28 10:29:25.183 [0-0-0-reader] INFO  CommonRdbmsReader$Task - Begin to read record by Sql: [select * from t2 where (createtime > '2023-11-28 03:17:46.330' and createtime < '2023-11-28 03:31:46.330')
] jdbcUrl:[jdbc:sqlserver://127.0.0.1:1433;databaseName=testdb].
2023-11-28 10:29:25.344 [0-0-0-reader] INFO  CommonRdbmsReader$Task - Finished read record by Sql: [select * from t2 where (createtime > '2023-11-28 03:17:46.330' and createtime < '2023-11-28 03:31:46.330')
] jdbcUrl:[jdbc:sqlserver://127.0.0.1:1433;databaseName=testdb].
2023-11-28 10:29:25.606 [taskGroup-0] INFO  TaskGroupContainer - taskGroup[0] taskId[0] is successed, used[431]ms
2023-11-28 10:29:25.607 [taskGroup-0] INFO  TaskGroupContainer - taskGroup[0] completed it's tasks.
2023-11-28 10:29:35.173 [job-0] INFO  StandAloneJobContainerCommunicator - Total 4 records, 37 bytes | Speed 3B/s, 0 records/s | Error 0 records, 0 bytes |  All Task WaitWriterTime 0.000s |  All Task WaitReaderTime 0.000s | Percentage 100.00%
2023-11-28 10:29:35.173 [job-0] INFO  AbstractScheduler - Scheduler accomplished all tasks.
2023-11-28 10:29:35.174 [job-0] INFO  JobContainer - DataX Writer.Job [mysqlwriter] do post work.
2023-11-28 10:29:35.175 [job-0] INFO  JobContainer - DataX Reader.Job [sqlserverreader] do post work.
2023-11-28 10:29:35.175 [job-0] INFO  JobContainer - DataX jobId [0] completed successfully.
2023-11-28 10:29:35.177 [job-0] INFO  HookInvoker - No hook invoked, because base dir not exists or is a file: /soft/datax/hook
2023-11-28 10:29:35.179 [job-0] INFO  JobContainer - 

 [total cpu info] => 
averageCpu                     | maxDeltaCpu                    | minDeltaCpu                    
-1.00%                         | -1.00%                         | -1.00%

 [total gc info] => 
 NAME                 | totalGCCount       | maxDeltaGCCount    | minDeltaGCCount    | totalGCTime        | maxDeltaGCTime     | minDeltaGCTime     
 PS MarkSweep         | 1                  | 1                  | 1                  | 0.052s             | 0.052s             | 0.052s             
 PS Scavenge          | 1                  | 1                  | 1                  | 0.024s             | 0.024s             | 0.024s             
2023-11-28 10:29:35.180 [job-0] INFO  JobContainer - PerfTrace not enable!
2023-11-28 10:29:35.181 [job-0] INFO  StandAloneJobContainerCommunicator - Total 4 records, 37 bytes | Speed 3B/s, 0 records/s | Error 0 records, 0 bytes |  All Task WaitWriterTime 0.000s |  All Task WaitReaderTime 0.000s | Percentage 100.00%

2023-11-28 10:29:35.183 [job-0] info  jobcontainer - 
任务启动时刻                    : 2023-11-28 10:29:24
任务结束时刻                    : 2023-11-28 10:29:35
任务总计耗时                    :                 10s
任务平均流量                    :                3b/s
记录写入速度                    :              0rec/s
读出记录总数                    :                   4
读写失败总数                    :                   0
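When the job runs unattended, the closing summary above is worth checking mechanically: DataX reports the failed-record total on its "读写失败总数" line. A wrapper could capture the output and abort on a non-zero value. A small sketch follows; the log path is an assumption, and the here-document merely stands in for a real captured log.

```shell
#!/bin/bash
# Hypothetical post-run check: scan a saved DataX log and fail if the
# summary reports any failed records. (Path and sample log are stand-ins.)
LOG=/tmp/datax_run.log

# Stand-in for a real captured log; normally something like:
#   python /soft/datax/bin/datax.py job.json ... | tee "$LOG"
cat > "$LOG" <<'EOF'
读出记录总数                    :                   4
读写失败总数                    :                   0
EOF

# Pull the value after the colon on the failed-records line.
FAILED=$(awk -F: '/读写失败总数/ {gsub(/ /, "", $2); print $2}' "$LOG")
if [ "$FAILED" != "0" ]; then
    echo "DataX job reported $FAILED failed records" >&2
    exit 1
fi
echo "DataX job ok: 0 failed records"
```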

4.8 Verify the incremental data on the target (GreatSQL)

greatsql> select * from t2;

+----+---------------------+
| id | createtime          |
+----+---------------------+
|  1 | 2023-11-28 02:18:21 |
|  2 | 2023-11-28 02:18:27 |
|  3 | 2023-11-28 02:18:32 |
|  4 | 2023-11-28 02:18:38 |
|  5 | 2023-11-28 02:18:41 |
|  6 | 2023-11-28 02:18:46 |
|  7 | 2023-11-28 03:18:46 |
|  8 | 2023-11-28 03:20:46 |
|  9 | 2023-11-28 03:25:46 |
| 10 | 2023-11-28 03:30:46 |
+----+---------------------+
10 rows in set (0.00 sec)

Incremental migration summary: incremental migration is achieved by adding a filter condition to the reader. The WHERE clause on createtime excludes the rows that the full migration already copied, so each run transfers only the rows falling inside the given time window.
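One subtlety worth noting: the job's where clause uses strict comparisons (`>` and `<`), so a row whose createtime equals a window boundary is skipped both by the window that ends there and by the one that starts there. A common convention is a half-open window, `createtime >= '${start_time}' and createtime < '${end_time}'`, with each run's end_time becoming the next run's start_time. A minimal sketch of chaining such windows (the times and the 10-minute step are illustrative, and `date -d` assumes GNU date):

```shell
#!/bin/bash
# Chain half-open windows [start, end): each boundary belongs to exactly one
# window, so no row is missed and none is copied twice. Illustrative only.
START='2023-11-28 03:00:00'
for i in 1 2 3; do
    START_EPOCH=$(date -d "$START" +%s)
    END=$(date -d "@$((START_EPOCH + 600))" '+%Y-%m-%d %H:%M:%S')
    echo "window $i: createtime >= '$START' and createtime < '$END'"
    START=$END    # the next window starts exactly where this one ended
done
```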


Enjoy GreatSQL :)

About GreatSQL

GreatSQL is a domestically developed, self-reliant open-source database suited for financial-grade applications, with several core features including high performance, high reliability, high usability, and high security. It can serve as an optional replacement for MySQL or Percona Server in production environments, is completely free, and is compatible with MySQL and Percona Server.

Related links: GreatSQL Community, Gitee, GitHub, Bilibili

GreatSQL Community:

Community suggestions and feedback (with rewards): https://greatsql.cn/thread-54-1-1.html

Community blog call for posts (with rewards): https://greatsql.cn/thread-100-1-1.html

(Questions about this article, or your own insights, are welcome on the community site.)

Tech discussion groups:

WeChat & QQ:

QQ group: 533341697

WeChat group: add the GreatSQL community assistant (WeChat ID: wanlidbc) as a friend, and the assistant will invite you into the group.
