How to enable and support keep-alive (persistent) connections
When nginx is used as a reverse proxy, supporting keep-alive requires two things:
- the connection from the client to nginx is a keep-alive connection
- the connection from nginx to the server is a keep-alive connection
From the HTTP protocol's point of view, nginx plays the role of an HTTP server toward the client, and the role of an HTTP client toward the real server (called the upstream in nginx terminology).
For the client and nginx to maintain a keep-alive connection, two requirements must be met:
- the HTTP request sent by the client asks for keep-alive
- nginx is configured to support keep-alive
1. Keep-alive configuration between the client and nginx
By default, nginx automatically enables keep-alive support for client connections. Common scenarios can use it as-is, but some special scenarios still warrant tuning individual parameters.
The keepalive_timeout directive
Syntax of the keepalive_timeout directive:
syntax:  keepalive_timeout timeout [header_timeout];
default: keepalive_timeout 75s;
context: http, server, location
The first parameter sets a timeout during which a keep-alive client connection will stay open on the server side. A value of 0 disables keep-alive client connections. The optional second parameter sets a value for the "Keep-Alive: timeout=time" response header field. The two parameters may differ.
Note: the default of 75s is usually sufficient. For internal server-to-server traffic with relatively heavy requests, it is reasonable to raise it to 120s or 300s. The second parameter can usually be left unset.
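As a sketch, raising the timeout for internal service-to-service traffic might look like this (120s is an illustrative value, not a recommendation from the article):

```nginx
http {
    # keep idle client connections open for 120s instead of the 75s default;
    # an optional second value would also emit a "Keep-Alive: timeout=..." header
    keepalive_timeout 120s;
}
```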
The keepalive_requests directive
The keepalive_requests directive sets the maximum number of requests that can be served over one keep-alive connection. After the maximum number of requests is reached, the connection is closed. The default is 100.
What this parameter really means: once a keep-alive connection is established, nginx attaches a counter to it that records how many client requests have been received and processed on that connection. When the counter reaches the configured maximum, nginx forcibly closes the connection, forcing the client to establish a new one.
This parameter is often overlooked, because at moderate qps (queries per second) the default of 100 is good enough. But for high-qps scenarios (say above 10000 qps, or even 30000 or 50000 qps and higher), the default of 100 is far too low.
A quick calculation: at qps = 10000, the client sends 10000 requests per second (usually over multiple keep-alive connections). If each connection can carry at most 100 requests, then on average 100 keep-alive connections are closed by nginx every second, which likewise means that to sustain the qps the client has to open 100 new connections every second. So if you run netstat on the client machine, you will see a large number of sockets in TIME_WAIT, even though keep-alive is already in effect between the client and nginx.
So for high-qps scenarios it is well worth raising this parameter, to avoid large numbers of connections being created and then thrown away, and to reduce TIME_WAIT sockets.
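A minimal sketch of such tuning (10000 is an illustrative value, not one prescribed by the article):

```nginx
http {
    # allow far more requests per keep-alive connection, so that at
    # ~10000 qps connections are not closed and reopened 100 times a second
    keepalive_requests 10000;
}
```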
Note: in nginx 1.19.10 and later the default is 1000 — the maximum number of requests served over a single connection; once it is exceeded, the connection is destroyed.
The keepalive_disable directive
Disables keep-alive connections for certain browsers; the default is msie6.
The send_timeout directive
Sets the timeout between two successive write operations to the client; if it is exceeded, the connection is closed. The default is 60s.
There is a pitfall here: a time-consuming synchronous operation may cause the client connection to be dropped.
In practice this means that once nginx has connected with a client, if during a session the server keeps the client waiting for output longer than the configured send_timeout, nginx closes the connection automatically.
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65 65; # keep-alive expires after 65s without activity
    keepalive_time 1h;       # total lifetime of one TCP connection; forcibly expired afterwards
    send_timeout 60;         # default 60s -- pitfall! if an operation in the system takes longer than send_timeout,
                             # the connection is forcibly closed. Note: this concerns the preparation phase, not the transmission phase
    keepalive_requests 1000; # maximum number of requests served over one reused TCP connection
}
2. Keep-alive configuration between nginx and the server side
2.1 Configuration in the upstream block
keepalive 100;
The maximum number of idle keep-alive connections to upstream servers preserved in the cache of each worker process (roughly a connection-pool concept; it limits idle connections, not the total number of connections).
keepalive_timeout 65s;
How long an idle keep-alive connection to an upstream server stays open, in seconds (available in the upstream context since nginx 1.15.3).
keepalive_requests 10000;
The maximum number of requests that can be served over one reused upstream connection (requests on a single connection are serialized, not concurrent).
2.2 Configuration in the server block
proxy_http_version 1.1;
Sets the HTTP protocol version for proxied requests. The default is HTTP/1.0, which only keeps connections alive if a "Connection: keep-alive" header is added to the request; HTTP/1.1 supports keep-alive by default.
proxy_set_header Connection "";
Clears the Connection header (e.g. "close") so it is not passed on to the upstream and the connection can be reused.
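Putting sections 2.1 and 2.2 together, a minimal sketch might look like the following (the upstream name "backend" is an assumption; the address reuses the Tomcat host from the load tests; the numbers are illustrative):

```nginx
upstream backend {                 # "backend" is an assumed name
    server 192.168.44.105:8080;    # the Tomcat host used in the load tests
    keepalive 100;                 # idle keep-alive connections cached per worker
    keepalive_timeout 65s;         # close idle upstream connections after 65s
    keepalive_requests 10000;      # recycle a connection after 10000 requests
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;          # HTTP/1.1 keeps connections alive by default
        proxy_set_header Connection "";  # do not forward "Connection: close" to the upstream
    }
}
```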
3. Load testing
3.1 [client] directly connected to [nginx]
server software: nginx/1.21.6
server hostname: 192.168.44.102
server port: 80
document path: /
document length: 16 bytes
concurrency level: 30
time taken for tests: 13.035 seconds
complete requests: 100000
failed requests: 0
write errors: 0
total transferred: 25700000 bytes
html transferred: 1600000 bytes
requests per second: 7671.48 [#/sec] (mean)
time per request: 3.911 [ms] (mean)
time per request: 0.130 [ms] (mean, across all concurrent requests)
transfer rate: 1925.36 [kbytes/sec] received
connection times (ms)
min mean[+/-sd] median max
connect: 0 0 0.4 0 12
processing: 1 3 1.0 3 14
waiting: 0 3 0.9 3 14
total: 2 4 0.9 4 14
percentage of the requests served within a certain time (ms)
50% 4
66% 4
75% 4
80% 4
90% 5
95% 5
98% 6
99% 7
100% 14 (longest request)
3.2 [client] connected to [nginx] reverse-proxying [nginx]
server software: nginx/1.21.6
server hostname: 192.168.44.101
server port: 80
document path: /
document length: 16 bytes
concurrency level: 30
time taken for tests: 25.968 seconds
complete requests: 100000
failed requests: 0
write errors: 0
total transferred: 25700000 bytes
html transferred: 1600000 bytes
requests per second: 3850.85 [#/sec] (mean)
time per request: 7.790 [ms] (mean)
time per request: 0.260 [ms] (mean, across all concurrent requests)
transfer rate: 966.47 [kbytes/sec] received
connection times (ms)
min mean[+/-sd] median max
connect: 0 0 0.2 0 13
processing: 3 8 1.4 7 22
waiting: 1 7 1.4 7 22
total: 3 8 1.4 7 22
percentage of the requests served within a certain time (ms)
50% 7
66% 8
75% 8
80% 8
90% 9
95% 10
98% 12
99% 13
100% 22 (longest request)
3.3 [client] directly connected to [tomcat]
server software:
server hostname: 192.168.44.105
server port: 8080
document path: /
document length: 7834 bytes
concurrency level: 30
time taken for tests: 31.033 seconds
complete requests: 100000
failed requests: 0
write errors: 0
total transferred: 804300000 bytes
html transferred: 783400000 bytes
requests per second: 3222.38 [#/sec] (mean)
time per request: 9.310 [ms] (mean)
time per request: 0.310 [ms] (mean, across all concurrent requests)
transfer rate: 25310.16 [kbytes/sec] received
connection times (ms)
min mean[+/-sd] median max
connect: 0 0 0.3 0 15
processing: 0 9 7.8 7 209
waiting: 0 9 7.2 7 209
total: 0 9 7.8 7 209
percentage of the requests served within a certain time (ms)
50% 7
66% 9
75% 11
80% 13
90% 18
95% 22
98% 27
99% 36
100% 209 (longest request)
3.4 [client] connected to [nginx] reverse-proxying [tomcat], with keepalive enabled
server software: nginx/1.21.6
server hostname: 192.168.44.101
server port: 80
document path: /
document length: 7834 bytes
concurrency level: 30
time taken for tests: 23.379 seconds
complete requests: 100000
failed requests: 0
write errors: 0
total transferred: 806500000 bytes
html transferred: 783400000 bytes
requests per second: 4277.41 [#/sec] (mean)
time per request: 7.014 [ms] (mean)
time per request: 0.234 [ms] (mean, across all concurrent requests)
transfer rate: 33688.77 [kbytes/sec] received
connection times (ms)
min mean[+/-sd] median max
connect: 0 0 0.2 0 9
processing: 1 7 4.2 6 143
waiting: 1 7 4.2 6 143
total: 1 7 4.2 6 143
percentage of the requests served within a certain time (ms)
50% 6
66% 7
75% 7
80% 7
90% 8
95% 10
98% 13
99% 16
100% 143 (longest request)
3.5 [client] connected to [nginx] reverse-proxying [tomcat], without keepalive
server software: nginx/1.21.6
server hostname: 192.168.44.101
server port: 80
document path: /
document length: 7834 bytes
concurrency level: 30
time taken for tests: 33.814 seconds
complete requests: 100000
failed requests: 0
write errors: 0
total transferred: 806500000 bytes
html transferred: 783400000 bytes
requests per second: 2957.32 [#/sec] (mean)
time per request: 10.144 [ms] (mean)
time per request: 0.338 [ms] (mean, across all concurrent requests)
transfer rate: 23291.74 [kbytes/sec] received
connection times (ms)
min mean[+/-sd] median max
connect: 0 0 0.2 0 9
processing: 1 10 5.5 9 229
waiting: 1 10 5.5 9 229
total: 1 10 5.5 9 229
percentage of the requests served within a certain time (ms)
50% 9
66% 10
75% 11
80% 11
90% 13
95% 14
98% 17
99% 19
100% 229 (longest request)