Nacos Standalone Mode Fails to Start: Troubleshooting and Fix


Background

Our project uses Nacos for service registration and discovery. In the development environment we run a containerized Nacos deployment, as a single standalone node with the embedded Derby database. Recently, Nacos failed to start during deployment, with the following error log (the full log is long; excerpts below):

2021-09-18 22:40:31,822 INFO Tomcat initialized with port(s): 8848 (http)

2021-09-18 22:40:32,148 INFO Root WebApplicationContext: initialization completed in 11864 ms

2021-09-18 22:40:44,722 ERROR Error starting Tomcat context. Exception: org.springframework.beans.factory.BeanCreationException. Message: Error creating bean with name 'authFilterRegistration' defined in class path resource [com/alibaba/nacos/core/auth/AuthConfigs.class]: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.boot.web.servlet.FilterRegistrationBean]: Factory method 'authFilterRegistration' threw exception; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'authFilter': Unsatisfied dependency expressed through field 'authManager'; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'nacosAuthManager': Unsatisfied dependency expressed through field 'authenticationManager'; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'nacosAuthConfig': Unsatisfied dependency expressed through field 'userDetailsService'; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'nacosUserDetailsServiceImpl': Unsatisfied dependency expressed through field 'userPersistService'; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'embeddedUserPersistServiceImpl': Unsatisfied dependency expressed through field 'databaseOperate'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'standaloneDatabaseOperateImpl': Invocation of init method failed; nested exception is java.lang.RuntimeException: com.alibaba.nacos.api.exception.runtime.NacosRuntimeException: errCode: 500, errMsg: load schema.sql error.

2021-09-18 22:40:44,827 WARN Exception encountered during context initialization - cancelling refresh attempt: org.springframework.context.ApplicationContextException: Unable to start web server; nested exception is org.springframework.boot.web.server.WebServerException: Unable to start embedded Tomcat

2021-09-18 22:40:44,837 INFO Nacos Log files: /home/nacos/logs

2021-09-18 22:40:44,849 INFO Nacos Log files: /home/nacos/conf

2021-09-18 22:40:44,853 INFO Nacos Log files: /home/nacos/data

2021-09-18 22:40:44,856 ERROR Startup errors : {}

org.springframework.context.ApplicationContextException: Unable to start web server; nested exception is org.springframework.boot.web.server.WebServerException: Unable to start embedded Tomcat
        at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.onRefresh(ServletWebServerApplicationContext.java:157)
        at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:540)
...............
Caused by: com.alibaba.nacos.api.exception.runtime.NacosRuntimeException: errCode: 500, errMsg: load schema.sql error.
        at com.alibaba.nacos.config.server.service.datasource.LocalDataSourceServiceImpl.reload(LocalDataSourceServiceImpl.java:101)
        at com.alibaba.nacos.config.server.service.datasource.LocalDataSourceServiceImpl.initialize(LocalDataSourceServiceImpl.java:170)
        at com.alibaba.nacos.config.server.service.datasource.LocalDataSourceServiceImpl.init(LocalDataSourceServiceImpl.java:83)
        at com.alibaba.nacos.config.server.service.datasource.DynamicDataSource.getDataSource(DynamicDataSource.java:47)
        ... 166 common frames omitted
Caused by: java.sql.SQLTimeoutException: Login timeout exceeded.
        at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
        at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
        at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown Source)
        at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown Source)
        at org.apache.derby.jdbc.InternalDriver.timeLogin(Unknown Source)
        at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
        at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
        at org.apache.derby.jdbc.EmbeddedDriver.connect(Unknown Source)
        at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:138)
        at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:354)
        at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:202)
        at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:473)
        at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:554)
        at com.zaxxer.hikari.pool.HikariPool.<init>(HikariPool.java:115)
        at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:112)
        at com.alibaba.nacos.config.server.service.datasource.LocalDataSourceServiceImpl.reload(LocalDataSourceServiceImpl.java:96)
        ... 169 common frames omitted
Caused by: org.apache.derby.iapi.error.StandardException: Login timeout exceeded.
        at org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
        at org.apache.derby.impl.jdbc.SQLExceptionFactory.wrapArgsForTransportAcrossDRDA(Unknown Source)
        ... 185 common frames omitted

Analyzing the error log

While loading schema.sql, the Derby database reported a "Login timeout exceeded." error, which looks like a timeout. Following the exception stack into the (decompiled) Derby source:

private EmbedConnection timeLogin(String var1, Properties var2, int var3) throws SQLException {
    try {
        InternalDriver.LoginCallable var4 = new InternalDriver.LoginCallable(this, var1, var2);
        Future var5 = _executorPool.submit(var4);
        long var6 = System.currentTimeMillis();
        long var8 = var6 + (long)var3 * 1000L;

        while (var6 < var8) {
            try {
                EmbedConnection var10 = (EmbedConnection)var5.get(var8 - var6, TimeUnit.MILLISECONDS);
                return var10;
            } catch (InterruptedException var16) {
                InterruptStatus.setInterrupted();
                var6 = System.currentTimeMillis();
            } catch (ExecutionException var17) {
                throw this.processException(var17);
            } catch (TimeoutException var18) {
                throw Util.generateCsSQLException("XBDA0.C.1", new Object[0]);
            }
        }

        throw Util.generateCsSQLException("XBDA0.C.1", new Object[0]);
    } finally {
        InterruptStatus.restoreIntrFlagIfSeen();
    }
}

The logic is simple: obtaining the EmbedConnection (the embedded database connection) is wrapped in an asynchronous task, and a Future controls the timeout. So we can conclude that it was the acquisition of the EmbedConnection that timed out.
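The pattern is easy to illustrate. Below is a minimal, self-contained sketch of the same idea, wrapping a blocking login in a Future to bound the wait; the names (connectWithTimeout, openConnection) are mine, not Derby's, and the sketch simplifies Derby's retry-on-interrupt loop.

import java.sql.SQLException;
import java.sql.SQLTimeoutException;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimedLoginSketch {

    private static final ExecutorService POOL = Executors.newCachedThreadPool();

    // Submit the blocking "open a connection" work to a pool and bound the wait
    // with Future.get(timeout). This is how a caller-side timeout can fire even
    // while the worker thread is still stuck in native I/O (e.g. fsync).
    static <T> T connectWithTimeout(Callable<T> openConnection, int timeoutSeconds) throws SQLException {
        Future<T> future = POOL.submit(openConnection);
        try {
            return future.get(timeoutSeconds, TimeUnit.SECONDS);
        } catch (TimeoutException e) {
            future.cancel(true); // the worker may keep running: native I/O ignores interrupts
            throw new SQLTimeoutException("Login timeout exceeded.");
        } catch (InterruptedException | ExecutionException e) {
            throw new SQLException(e);
        }
    }

    public static void main(String[] args) throws Exception {
        // Simulate a "login" that takes 3s against a 1s budget: a SQLTimeoutException fires.
        try {
            connectWithTimeout(() -> { Thread.sleep(3_000); return "connection"; }, 1);
        } catch (SQLTimeoutException expected) {
            System.out.println("timed out as expected: " + expected.getMessage());
        } finally {
            POOL.shutdownNow();
        }
    }
}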

But why did acquiring the EmbedConnection time out, and what exactly happens inside it? Hard to tell from here (the decompiled class files lack local variable tables, which makes the code painful to read). At this point two log lines stand out:

2021-09-18 22:40:31,822 INFO Tomcat initialized with port(s): 8848 (http)

2021-09-18 22:40:44,722 ERROR ...

These two lines come from the Spring Boot startup log, and they are printed roughly 13 seconds apart. The guess, then, is that the database connection attempt timed out somewhere inside that window (consistent with a 10-second timeout), and that some thread must have been blocked the whole time.

The natural tool here is the jstack command, which dumps the Java process's thread stacks:

"derby.rawstoredaemon" #38 daemon prio=5 os_prio=0 tid=0x00007ff090057800 nid=0x6337 in object.wait() [0x00007ff097ffe000]
   java.lang.thread.state: timed_waiting (on object monitor)
        at java.lang.object.wait(native method)
        - waiting on <0x00000000ee89fd70> (a org.apache.derby.impl.services.daemon.basicdaemon)
        at org.apache.derby.impl.services.daemon.basicdaemon.rest(unknown source)
        - locked <0x00000000ee89fd70> (a org.apache.derby.impl.services.daemon.basicdaemon)
        at org.apache.derby.impl.services.daemon.basicdaemon.run(unknown source)
        at java.lang.thread.run(thread.java:748)

"thread-15" #37 daemon prio=5 os_prio=0 tid=0x00007ff12928b800 nid=0x6336 runnable [0x00007ff0ec1be000]
   java.lang.thread.state: runnable
        at java.io.filedescriptor.sync(native method)
        at org.apache.derby.impl.io.dirrandomaccessfile.sync(unknown source)
        at org.apache.derby.impl.store.raw.data.rafcontainer.writerafheader(unknown source)
        at org.apache.derby.impl.store.raw.data.rafcontainer.clean(unknown source)
        - locked <0x00000000ef759b28> (a org.apache.derby.impl.store.raw.data.rafcontainer4)
        at org.apache.derby.impl.services.cache.concurrentcache.cleanandunkeepentry(unknown source)
        at org.apache.derby.impl.services.cache.concurrentcache.cleancache(unknown source)
        at org.apache.derby.impl.services.cache.concurrentcache.cleanall(unknown source)
        at org.apache.derby.impl.store.raw.data.basedatafilefactory.checkpoint(unknown source)
        at org.apache.derby.impl.store.raw.data.basedatafilefactory.createfinished(unknown source)
        at org.apache.derby.impl.store.raw.rawstore.createfinished(unknown source)
        at org.apache.derby.impl.store.access.ramaccessmanager.createfinished(unknown source)
        at org.apache.derby.impl.db.basicdatabase.createfinished(unknown source)
        at org.apache.derby.impl.db.basicdatabase.boot(unknown source)
        at org.apache.derby.impl.services.monitor.basemonitor.boot(unknown source)
        at org.apache.derby.impl.services.monitor.topservice.bootmodule(unknown source)
        at org.apache.derby.impl.services.monitor.basemonitor.bootservice(unknown source)
        at org.apache.derby.impl.services.monitor.basemonitor.createpersistentservice(unknown source)
        at org.apache.derby.impl.services.monitor.filemonitor.createpersistentservice(unknown source)
        at org.apache.derby.iapi.services.monitor.monitor.createpersistentservice(unknown source)
        at org.apache.derby.impl.jdbc.embedconnection$5.run(unknown source)
        at java.security.accesscontroller.doprivileged(native method)
        at org.apache.derby.impl.jdbc.embedconnection.createpersistentservice(unknown source)
        at org.apache.derby.impl.jdbc.embedconnection.createdatabase(unknown source)
        at org.apache.derby.impl.jdbc.embedconnection.<init>(unknown source)
        at org.apache.derby.jdbc.internaldriver$1.run(unknown source)
        at org.apache.derby.jdbc.internaldriver$1.run(unknown source)
        at java.security.accesscontroller.doprivileged(native method)
        at org.apache.derby.jdbc.internaldriver.getnewembedconnection(unknown source)
        at org.apache.derby.jdbc.internaldriver$logincallable.call(unknown source)
        at org.apache.derby.jdbc.internaldriver$logincallable.call(unknown source)
        at java.util.concurrent.futuretask.run(futuretask.java:266)
        at java.util.concurrent.threadpoolexecutor.runworker(threadpoolexecutor.java:1149)
        at java.util.concurrent.threadpoolexecutor$worker.run(threadpoolexecutor.java:624)
        at java.lang.thread.run(thread.java:748)
.......

"main" #1 prio=5 os_prio=0 tid=0x00007ff12804c000 nid=0x61d1 waiting on condition [0x00007ff130ee5000]
   java.lang.thread.state: timed_waiting (parking)
        at sun.misc.unsafe.park(native method)
        - parking to wait for  <0x00000000c01f6de8> (a java.util.concurrent.locks.abstractqueuedsynchronizer$conditionobject)
        at java.util.concurrent.locks.locksupport.parknanos(locksupport.java:215)
        at java.util.concurrent.locks.abstractqueuedsynchronizer$conditionobject.awaitnanos(abstractqueuedsynchronizer.java:2078)
        at java.util.concurrent.threadpoolexecutor.awaittermination(threadpoolexecutor.java:1475)
        at com.alibaba.nacos.common.utils.threadutils.shutdownthreadpool(threadutils.java:121)
        at com.alibaba.nacos.common.utils.threadutils.shutdownthreadpool(threadutils.java:106)
        at com.alibaba.nacos.common.executor.threadpoolmanager.destroy(threadpoolmanager.java:156)
        - locked <0x00000000c03b65d8> (a java.lang.object)
        at com.alibaba.nacos.common.executor.threadpoolmanager.shutdown(threadpoolmanager.java:197)
        at com.alibaba.nacos.core.code.startingspringapplicationrunlistener.failed(startingspringapplicationrunlistener.java:147)
        at org.springframework.boot.springapplicationrunlisteners.callfailedlistener(springapplicationrunlisteners.java:91)
        at org.springframework.boot.springapplicationrunlisteners.failed(springapplicationrunlisteners.java:84)
        at org.springframework.boot.springapplication.handlerunfailure(springapplication.java:828)
        at org.springframework.boot.springapplication.run(springapplication.java:327)
        at org.springframework.boot.springapplication.run(springapplication.java:1260)
        at org.springframework.boot.springapplication.run(springapplication.java:1248)
        at com.alibaba.nacos.nacos.main(nacos.java:35)
        at sun.reflect.nativemethodaccessorimpl.invoke0(native method)
        at sun.reflect.nativemethodaccessorimpl.invoke(nativemethodaccessorimpl.java:62)
        at sun.reflect.delegatingmethodaccessorimpl.invoke(delegatingmethodaccessorimpl.java:43)
        at java.lang.reflect.method.invoke(method.java:498)
        at org.springframework.boot.loader.mainmethodrunner.run(mainmethodrunner.java:49)
        at org.springframework.boot.loader.launcher.launch(launcher.java:108)
        at org.springframework.boot.loader.launcher.launch(launcher.java:58)
        at org.springframework.boot.loader.propertieslauncher.main(propertieslauncher.java:467)

"vm thread" os_prio=0 tid=0x00007ff12819d000 nid=0x61df runnable

"gc task thread#0 (parallelgc)" os_prio=0 tid=0x00007ff12805e800 nid=0x61d8 runnable

"gc task thread#1 (parallelgc)" os_prio=0 tid=0x00007ff128060800 nid=0x61d9 runnable

"gc task thread#2 (parallelgc)" os_prio=0 tid=0x00007ff128062800 nid=0x61da runnable

"gc task thread#3 (parallelgc)" os_prio=0 tid=0x00007ff128064000 nid=0x61db runnable

"vm periodic task thread" os_prio=0 tid=0x00007ff1281f0000 nid=0x61e7 waiting on condition

jni global references: 991

Running jstack several times in a row, Thread-15 stayed blocked at the same frame:

        at java.io.FileDescriptor.sync(Native Method)

What does this method mean? Starting from how Linux handles writes (and how the JDK documents the method):

  • For I/O, Linux interposes the page cache to absorb the speed gap between memory and the physical medium: calling OutputStream.write does not write to the physical medium, it writes into the page cache and marks that block of memory dirty.
  • A kernel background task periodically flushes dirty pages to the physical medium. If the machine loses power before that happens, the unflushed content can be lost, so Linux provides fsync to force an immediate flush instead of waiting for the periodic writeback.
  • The JDK's wrapper around this is FileDescriptor.sync; see the small demo after this list.

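To make the cost of this call concrete, here is a minimal demo of FileDescriptor.sync(); the file path is arbitrary, and on a healthy disk the reported time for a tiny file should be in the sub-millisecond to low-millisecond range.

import java.io.FileDescriptor;
import java.io.FileOutputStream;
import java.io.IOException;

public class FsyncDemo {
    public static void main(String[] args) throws IOException {
        // Writes land in the page cache first; sync() forces them to the device.
        try (FileOutputStream out = new FileOutputStream("/tmp/fsync-demo.txt")) {
            out.write("hello".getBytes());
            FileDescriptor fd = out.getFD();
            long start = System.nanoTime();
            fd.sync(); // maps to fsync(2): blocks until the dirty pages hit disk
            long micros = (System.nanoTime() - start) / 1_000;
            System.out.println("fsync took " + micros + " microseconds");
        }
    }
}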
My first reaction was: could the page cache have grown so large that flushing it was slow? Check with the free -g command:

free -g
              total        used        free      shared  buff/cache   available
Mem:              8           1           0           0           6           6
Swap:             9           0           9

Total memory is 8 GB, of which buff/cache occupies as much as 6 GB. I immediately forced the page cache to disk (e.g. with sync) and restarted Nacos; unfortunately the problem remained.

My next thought: maybe some individual fsync call was taking too long. To measure how long each fsync took, I changed the startup command to:

#nohup java ${JAVA_OPT} >${BASE_DIR}/logs/start.out 2>&1 </dev/null
strace -T -t -ttt -ff -xx -yy -o strace.log java ${JAVA_OPT} >${BASE_DIR}/logs/start.out 2>&1 </dev/null

This wraps the JVM in strace to trace every system call: -T records the time spent in each call, -ttt prints wall-clock timestamps, and -ff splits the output into one file per thread, so the number after strace.log. is the (decimal) thread id. (-xx hex-escapes strings and -yy annotates file descriptors with their paths, which is why the fsync lines below show the target file.)

[root@localhost nacos]# ls -l
total 77832
drwxr-xr-x 2 root root     4096 Sep 18 22:37 bin
drwxr-xr-x 2 root root     4096 Sep 18 01:52 conf
drwxr-xr-x 3 root root     4096 Sep 18 22:40 data
-rw-r--r-- 1 root root      696 Sep 18 22:40 derby.log
drwxr-xr-x 2 root root     4096 Sep 18 22:40 logs
-rw-r--r-- 1 root root    44298 Sep 18 22:40 strace.log.27777
-rw-r--r-- 1 root root 58640394 Sep 18 22:40 strace.log.27778
-rw-r--r-- 1 root root    16262 Sep 18 22:40 strace.log.27779
-rw-r--r-- 1 root root    16882 Sep 18 22:40 strace.log.25398
......
drwxr-xr-x 2 root root     4096 Sep 18 22:38 target
drwxr-xr-x 3 root root     4096 Sep 18 22:40 work

Going back to the thread dump from earlier, locate the blocked thread:

"thread-15" #37 daemon prio=5 os_prio=0 tid=0x00007ff12928b800 nid=0x6336 runnable [0x00007ff0ec1be000]

Its thread id is nid=0x6336, which is 25398 in decimal. Find strace.log.25398, download it, open it in Notepad++, and search for fsync:

// wall-clock timestamp printed by -ttt
1631906136.913442 fsync(44<\x2f\x68\x6f\x6d\x65\x2f\x6e\x61\x63\x6f\x73\x2f\x64\x61\x74\x61\x2f\x64\x65\x72\x62\x79\x2d\x64\x61\x74\x61\x2f\x64\x62\x2e\x6c\x63\x6b>) = 0
// duration of this fsync, printed by -T; the -xx hex-escaped path on fd 44
// decodes to /home/nacos/data/derby-data/db.lck, Derby's database lock file
<0.184688>

Collecting all of the fsync calls and working through the printed timestamps, the total comes to roughly 11 seconds; a small sketch for automating the tally follows.
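Summing by hand is tedious. A throwaway sketch like the following adds up the -T durations instead of diffing timestamps (assuming every fsync line carries its duration in trailing angle brackets, as in the excerpt above); it should land near the same ~11 s total. Run it as: java FsyncTotal strace.log.25398

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class FsyncTotal {
    public static void main(String[] args) throws IOException {
        // Matches e.g.:  1631906136.913442 fsync(44</path/to/db.lck>) = 0 <0.184688>
        Pattern p = Pattern.compile("fsync\\(.*<(\\d+\\.\\d+)>");
        double totalSeconds = 0;
        for (String line : Files.readAllLines(Paths.get(args[0]))) {
            Matcher m = p.matcher(line);
            if (m.find()) {
                totalSeconds += Double.parseDouble(m.group(1)); // the -T duration
            }
        }
        System.out.printf("total fsync time: %.3f s%n", totalSeconds);
    }
}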

The remaining question is the timeout used on the connection-acquiring side. Tracing through the Nacos source to where the embedded data source is initialized:

private synchronized void initialize(String jdbcUrl) {
    HikariDataSource ds = new HikariDataSource();
    ds.setDriverClassName(jdbcDriverName);
    ds.setJdbcUrl(jdbcUrl);
    ds.setUsername(userName);
    ds.setPassword(password);
    ds.setIdleTimeout(30_000L);
    ds.setMaximumPoolSize(80);
    ds.setConnectionTimeout(10000L);
    DataSourceTransactionManager tm = new DataSourceTransactionManager();
    tm.setDataSource(ds);
    if (jdbcTemplateInit) {
        jt.setDataSource(ds);
        tjt.setTransactionManager(tm);
    } else {
        jt = new JdbcTemplate();
        jt.setMaxRows(50000);
        jt.setQueryTimeout(5000);
        jt.setDataSource(ds);
        tjt = new TransactionTemplate(tm);
        tjt.setTimeout(5000);
        jdbcTemplateInit = true;
    }
    reload();
}

ds.setConnectionTimeout(10000L) is where the connection timeout is set: it is hardcoded to 10 seconds, which lines up well with the ~11 s computed from strace.log. So it is now confirmed that slow Linux fsync calls caused the business-side timeout. But why were they so slow? After all, the 10 s default Alibaba ships should have been validated. With that question in mind, the next stop was disk performance:

iostat -x 1 1000

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           3.49    0.00    2.00   22.94    0.00   71.57

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00    43.00    0.00   52.00     0.00   488.00    14.92     0.96   17.56    0.00   17.56  18.42  95.80
dm-0              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
dm-1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
dm-2              0.00     0.00    0.00   87.00     0.00   488.00     8.92     0.99   10.92    0.00   10.92  11.01  95.80

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           3.77    0.00    1.26   23.37    0.00   71.61

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00    47.00    0.00   59.00     0.00   548.00    18.58     0.99   16.47    0.00   16.47  16.24  95.80
dm-0              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
dm-1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
dm-2              0.00     0.00    0.00   98.00     0.00   552.00    11.27     1.03   10.40    0.00   10.40   9.79  95.90

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           3.99    0.00    1.75   22.94    0.00   71.32

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00    42.00    0.00   55.00     0.00   464.00    13.24     0.98   19.11    0.00   19.11  17.22  94.70
dm-0              0.00     0.00    0.00    2.00     0.00    36.00    36.00     0.03   15.50    0.00   15.50   8.00   1.60
dm-1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
dm-2              0.00     0.00    0.00   86.00     0.00   424.00     7.53     0.98   12.30    0.00   12.30  11.01  94.70

Watching these figures during Nacos startup, %util (device utilization) hovers around 95%: the disk is close to saturated, yet the write throughput wkB/s averages only about 500 kB/s.

Conclusion: the disk is simply too slow, so flushing to disk during Derby's database initialization exceeded the default 10 s connection timeout, and Nacos failed to start.

The fix is simple: raise the default timeout, or better, turn it into a configuration item in the config file; a minimal sketch of the latter follows.
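As a sketch of the "make it configurable" option: read the timeout from a JVM system property, with the old hardcoded value as the fallback. The property name below is hypothetical, not a real Nacos setting in this version, so in practice this means patching LocalDataSourceServiceImpl (or maintaining a small fork).

import com.zaxxer.hikari.HikariDataSource;

public class ConfigurableTimeoutSketch {

    // Hypothetical property name; stock Nacos has no such switch.
    private static final String TIMEOUT_PROP = "nacos.derby.connection.timeout.ms";

    static HikariDataSource buildDataSource(String jdbcUrl) {
        HikariDataSource ds = new HikariDataSource();
        ds.setJdbcUrl(jdbcUrl);
        // Fall back to the original hardcoded 10s when the property is absent;
        // on slow disks, start with e.g. -Dnacos.derby.connection.timeout.ms=30000
        ds.setConnectionTimeout(Long.getLong(TIMEOUT_PROP, 10_000L));
        return ds;
    }
}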

Summary

The above is my personal experience with this problem; I hope it gives you a useful reference.
