前言
在当今高并发、大流量的互联网应用中,单台服务器往往难以承受巨大的访问压力。负载均衡作为解决这一问题的关键技术,能够将请求分发到多台服务器上,实现系统的高可用性和高性能。
nginx作为业界领先的Web服务器和反向代理服务器,其负载均衡功能强大且配置灵活,被广泛应用于各大互联网公司。本文将深入探讨nginx负载均衡的各种策略和配置方法,并通过实际案例帮助你掌握负载均衡的核心技能。
一、负载均衡基础概念
1.1 什么是负载均衡
负载均衡(load balancing)是一种将网络请求分配到多个服务器的技术,主要目的是:
核心目标:
- 高可用性:避免单点故障,提高系统可靠性
- 高性能:分散请求压力,提升系统处理能力
- 可扩展性:支持水平扩展,便于系统扩容
- 灵活性:支持动态调整和灰度发布
工作原理:
客户端 → 负载均衡器 → 后端服务器集群
                        ├── 服务器1
                        ├── 服务器2
                        └── 服务器3
1.2 负载均衡类型
硬件负载均衡
- F5 BIG-IP:企业级硬件负载均衡器
- A10:高性能负载均衡设备
- Citrix NetScaler:应用交付控制器
优点:
- 性能强大,处理能力高
- 功能完善,支持复杂算法
- 稳定性好,可靠性高
缺点:
- 价格昂贵,成本高
- 配置复杂,需要专业维护
- 扩展性差,升级困难
软件负载均衡
- nginx:高性能Web服务器和反向代理
- HAProxy:高可用负载均衡器
- LVS:Linux虚拟服务器(Linux Virtual Server)
- Apache mod_proxy_balancer:Apache的代理负载均衡模块
优点:
- 成本低廉,开源免费
- 配置灵活,易于扩展
- 社区活跃,支持丰富
缺点:
- 性能相对硬件较低
- 功能相对简单
- 需要自己维护高可用
1.3 nginx负载均衡优势
技术优势:
- 高性能:基于事件驱动架构,支持数万并发连接
- 高可用性:支持健康检查和故障转移
- 灵活性:支持多种负载均衡算法
- 扩展性:支持第三方模块扩展
功能优势:
- 七层负载均衡:支持HTTP/HTTPS协议
- 四层负载均衡:支持TCP/UDP协议(需编译stream模块)
- SSL终止:支持SSL证书卸载
- 缓存功能:支持内容缓存
- 压缩功能:支持gzip压缩
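在动手配置前,可以先确认当前nginx是否编译了相关模块(下面是一个示意命令,需要检查的模块名以你的实际需求为准):
# 查看nginx版本与编译参数,确认包含stream、stub_status等所需模块
nginx -V 2>&1 | tr ' ' '\n' | grep -E 'with-(stream|http_ssl_module|http_stub_status_module)'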
二、nginx负载均衡基础配置
2.1 upstream模块详解
upstream基本语法
# =============================================
# upstream模块基础配置
# =============================================
# 定义后端服务器组
upstream backend_servers {
# 服务器地址和端口
server 192.168.1.10:8080;
server 192.168.1.11:8080;
server 192.168.1.12:8080;
# 负载均衡方法
# 默认为轮询(round-robin)
# least_conn; # 最少连接
# ip_hash; # ip哈希
# hash $request_uri; # 一致性哈希
}
server {
listen 80;
server_name lb.example.com;
location / {
# 代理到后端服务器组
proxy_pass http://backend_servers;
# 设置代理头信息
proxy_set_header host $host;
proxy_set_header x-real-ip $remote_addr;
proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
proxy_set_header x-forwarded-proto $scheme;
}
}
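上述配置生效后,可以先用curl做一次连通性确认(示意命令,假设nginx监听本机80端口,域名未解析时可直接携带Host头访问):
# 确认代理链路可用,预期返回后端的200响应
curl -I -H "Host: lb.example.com" http://127.0.0.1/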
upstream参数详解
# =============================================
# upstream服务器参数详解
# =============================================
upstream backend_servers {
# 基本服务器配置
server 192.168.1.10:8080 weight=5 max_fails=3 fail_timeout=30s backup;
server 192.168.1.11:8080 weight=3 max_fails=3 fail_timeout=30s;
server 192.168.1.12:8080 weight=2 max_fails=3 fail_timeout=30s down;
# 服务器参数说明:
# weight: 权重,数值越大分配到的请求越多
# max_fails: 最大失败次数,超过则标记为不可用
# fail_timeout: 失败超时时间,单位秒
# backup: 备份服务器,主服务器都不可用时启用
# down: 标记服务器永久不可用
    # max_conns: 最大连接数限制
    # 以下三个参数仅nginx plus(商业版)支持:
    # resolve: 动态解析域名
    # service: 基于dns srv记录的服务发现
    # slow_start: 慢启动时间
# 负载均衡方法
least_conn;
# 连接保持配置
keepalive 32;
keepalive_timeout 30s;
keepalive_requests 1000;
    # 会话保持配置(sticky指令由nginx plus或第三方sticky模块提供,开源版默认不支持)
    # sticky cookie srv_id expires=1h domain=.example.com path=/;
    # 主动健康检查(health_check为nginx plus指令,且应配置在location块中,不能写在upstream中)
    # health_check interval=10s fails=3 passes=2 uri=/health;
}
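每次调整upstream配置后,建议先做语法检查再平滑重载,避免错误配置导致服务中断:
# 检查配置语法并平滑重载nginx
nginx -t && nginx -s reload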
2.2 基础负载均衡配置
简单轮询负载均衡
# =============================================
# 简单轮询负载均衡配置
# =============================================
# 定义后端服务器组(轮询方式)
upstream backend_round_robin {
# 轮询方式(默认)
server 192.168.1.10:8080;
server 192.168.1.11:8080;
server 192.168.1.12:8080;
# 连接保持配置
keepalive 32;
keepalive_timeout 30s;
keepalive_requests 1000;
}
server {
listen 80;
server_name round-robin.example.com;
access_log /var/log/nginx/round-robin.example.com.access.log main;
error_log /var/log/nginx/round-robin.example.com.error.log warn;
location / {
# 代理到后端服务器组
proxy_pass http://backend_round_robin;
# 设置代理头信息
proxy_set_header host $host;
proxy_set_header x-real-ip $remote_addr;
proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
proxy_set_header x-forwarded-proto $scheme;
# 连接设置
proxy_http_version 1.1;
proxy_set_header connection "";
# 超时设置
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
# 缓冲区设置
proxy_buffering on;
proxy_buffer_size 4k;
proxy_buffers 8 4k;
proxy_busy_buffers_size 8k;
# 添加负载均衡信息到响应头
add_header x-upstream-addr $upstream_addr;
add_header x-upstream-response-time $upstream_response_time;
}
# =============================================
# 健康检查端点
# =============================================
location /health {
access_log off;
return 200 "healthy\n";
add_header content-type text/plain;
}
}
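由于上面的location在响应中加入了x-upstream-addr头,可以用下面的小脚本观察轮询分配是否均匀(示意脚本,假设从nginx所在机器发起请求):
# 连续请求12次,统计每个后端被命中的次数(轮询下应基本均匀)
for i in $(seq 1 12); do
    curl -s -o /dev/null -D - -H "Host: round-robin.example.com" http://127.0.0.1/ \
        | grep -i '^x-upstream-addr'
done | sort | uniq -c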
带权重轮询负载均衡
# =============================================
# 带权重轮询负载均衡配置
# =============================================
# 定义后端服务器组(加权轮询)
upstream backend_weighted {
# 权重分配,数值越大分配到的请求越多
    server 192.168.1.10:8080 weight=5 max_fails=3 fail_timeout=30s; # 约50%的请求
    server 192.168.1.11:8080 weight=3 max_fails=3 fail_timeout=30s; # 约30%的请求
    server 192.168.1.12:8080 weight=2 max_fails=3 fail_timeout=30s; # 约20%的请求
    # 注意:同一server不要重复声明,健康检查参数直接写在对应的server行上
# 连接保持
keepalive 32;
keepalive_timeout 30s;
keepalive_requests 1000;
}
server {
listen 80;
server_name weighted.example.com;
access_log /var/log/nginx/weighted.example.com.access.log main;
error_log /var/log/nginx/weighted.example.com.error.log warn;
location / {
proxy_pass http://backend_weighted;
proxy_set_header host $host;
proxy_set_header x-real-ip $remote_addr;
proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
proxy_set_header x-forwarded-proto $scheme;
proxy_http_version 1.1;
proxy_set_header connection "";
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
proxy_buffering on;
proxy_buffer_size 4k;
proxy_buffers 8 4k;
proxy_busy_buffers_size 8k;
# 添加权重信息到响应头
add_header x-upstream-addr $upstream_addr;
add_header x-upstream-weight "5:3:2";
add_header x-upstream-response-time $upstream_response_time;
}
# =============================================
# 负载均衡统计
# =============================================
location /lb_stats {
# 限制访问
allow 127.0.0.1;
allow 192.168.1.0/24;
deny all;
# 显示负载均衡统计信息
add_header content-type "application/json";
return 200 '{
"algorithm": "weighted_round_robin",
"servers": [
{"addr": "192.168.1.10:8080", "weight": 5, "status": "up"},
{"addr": "192.168.1.11:8080", "weight": 3, "status": "up"},
{"addr": "192.168.1.12:8080", "weight": 2, "status": "up"}
]
}';
}
}
三、负载均衡策略详解
3.1 轮询策略(round robin)
基础轮询配置
# =============================================
# 轮询策略配置
# =============================================
# 定义后端服务器组(轮询)
upstream backend_round_robin {
# 轮询方式(默认)
    server 192.168.1.10:8080 max_fails=3 fail_timeout=30s;
    server 192.168.1.11:8080 max_fails=3 fail_timeout=30s;
    server 192.168.1.12:8080 max_fails=3 fail_timeout=30s;
# 连接保持
keepalive 32;
keepalive_timeout 30s;
keepalive_requests 1000;
}
server {
listen 80;
server_name rr.example.com;
location / {
proxy_pass http://backend_round_robin;
proxy_set_header host $host;
proxy_set_header x-real-ip $remote_addr;
proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
proxy_set_header x-forwarded-proto $scheme;
proxy_http_version 1.1;
proxy_set_header connection "";
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
# 添加轮询信息
add_header x-lb-algorithm "round_robin";
add_header x-upstream-addr $upstream_addr;
}
}
轮询策略特点
工作原理:
- 按照顺序将请求依次分配到后端服务器
- 第一个请求分配到服务器1,第二个到服务器2,以此类推
- 循环往复,实现均匀分配
适用场景:
- 后端服务器性能相近
- 请求处理时间相似
- 不需要会话保持
优缺点:
- ✅ 配置简单,易于理解
- ✅ 请求分配均匀
- ❌ 不考虑服务器性能差异
- ❌ 不考虑当前连接数
3.2 加权轮询策略(weighted round robin)
加权轮询配置
# =============================================
# 加权轮询策略配置
# =============================================
# 定义后端服务器组(加权轮询)
upstream backend_weighted {
# 权重分配
    server 192.168.1.10:8080 weight=5 max_fails=3 fail_timeout=30s; # 高性能服务器
    server 192.168.1.11:8080 weight=3 max_fails=3 fail_timeout=30s; # 中等性能服务器
    server 192.168.1.12:8080 weight=2 max_fails=3 fail_timeout=30s; # 低性能服务器
# 连接保持
keepalive 32;
keepalive_timeout 30s;
keepalive_requests 1000;
}
server {
listen 80;
server_name wrr.example.com;
location / {
proxy_pass http://backend_weighted;
proxy_set_header host $host;
proxy_set_header x-real-ip $remote_addr;
proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
proxy_set_header x-forwarded-proto $scheme;
proxy_http_version 1.1;
proxy_set_header connection "";
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
# 添加权重信息
add_header x-lb-algorithm "weighted_round_robin";
add_header x-upstream-addr $upstream_addr;
add_header x-upstream-weight "5:3:2";
}
}
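可以用同样的方式统计加权轮询的实际分配比例,样本足够大时应接近5:3:2(示意脚本):
# 发送100个请求并统计各后端命中次数,预期约为50:30:20
for i in $(seq 1 100); do
    curl -s -o /dev/null -D - -H "Host: wrr.example.com" http://127.0.0.1/ \
        | grep -i '^x-upstream-addr'
done | sort | uniq -c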
加权轮询策略特点
工作原理:
- 根据服务器权重分配请求
- 权重越高,分配到的请求越多
- 权重比例为5:3:2,则请求分配比例约为50%:30%:20%
适用场景:
- 后端服务器性能差异较大
- 服务器配置不同
- 需要根据性能分配负载
优缺点:
- ✅ 考虑服务器性能差异
- ✅ 灵活配置负载分配
- ✅ 优化资源利用率
- ❌ 需要手动调整权重
- ❌ 不考虑实时负载情况
3.3 ip哈希策略(ip hash)
ip哈希配置
# =============================================
# ip哈希策略配置
# =============================================
# 定义后端服务器组(ip哈希)
upstream backend_ip_hash {
# ip哈希方式,确保同一客户端请求始终转发到同一服务器
ip_hash;
    server 192.168.1.10:8080 max_fails=3 fail_timeout=30s;
    server 192.168.1.11:8080 max_fails=3 fail_timeout=30s;
    server 192.168.1.12:8080 max_fails=3 fail_timeout=30s;
# 连接保持
keepalive 32;
keepalive_timeout 30s;
keepalive_requests 1000;
}
server {
listen 80;
server_name iphash.example.com;
location / {
proxy_pass http://backend_ip_hash;
proxy_set_header host $host;
proxy_set_header x-real-ip $remote_addr;
proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
proxy_set_header x-forwarded-proto $scheme;
proxy_http_version 1.1;
proxy_set_header connection "";
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
# 添加哈希信息
add_header x-lb-algorithm "ip_hash";
add_header x-upstream-addr $upstream_addr;
add_header x-client-ip $remote_addr;
}
}
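ip_hash的效果可以这样验证:同一客户端连续请求应始终命中同一台后端(示意脚本;注意同一出口IP后面的大量用户会被分到同一台服务器):
# 同一客户端连续请求5次,x-upstream-addr应始终相同
for i in $(seq 1 5); do
    curl -s -o /dev/null -D - -H "Host: iphash.example.com" http://127.0.0.1/ \
        | grep -i '^x-upstream-addr'
done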
ip哈希策略特点
工作原理:
- 基于客户端ip地址计算哈希值
- 同一ip的请求始终分配到同一服务器
- 确保会话一致性
适用场景:
- 需要会话保持
- 使用本地会话存储
- 无分布式会话
优缺点:
- ✅ 会话保持,用户体验好
- ✅ 配置简单
- ❌ 负载分配不均匀
- ❌ 服务器故障时影响大
- ❌ 不支持动态添加服务器
3.4 最少连接策略(least connections)
最少连接配置
# =============================================
# 最少连接策略配置
# =============================================
# 定义后端服务器组(最少连接)
upstream backend_least_conn {
# 最少连接方式,将请求转发到连接数最少的服务器
least_conn;
    server 192.168.1.10:8080 max_fails=3 fail_timeout=30s;
    server 192.168.1.11:8080 max_fails=3 fail_timeout=30s;
    server 192.168.1.12:8080 max_fails=3 fail_timeout=30s;
# 连接保持
keepalive 32;
keepalive_timeout 30s;
keepalive_requests 1000;
}
server {
listen 80;
server_name leastconn.example.com;
location / {
proxy_pass http://backend_least_conn;
proxy_set_header host $host;
proxy_set_header x-real-ip $remote_addr;
proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
proxy_set_header x-forwarded-proto $scheme;
proxy_http_version 1.1;
proxy_set_header connection "";
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
        # 添加负载均衡信息(nginx没有暴露上游当前连接数的内置变量)
        add_header x-lb-algorithm "least_conn";
        add_header x-upstream-addr $upstream_addr;
}
}
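least_conn的差异只有在并发场景下才能体现,可以用ab压测后再结合包含$upstream_addr的访问日志统计实际分布(示意命令,假设已安装Apache Bench且域名已解析到nginx):
# 并发压测,观察连接数均衡效果
ab -n 2000 -c 50 http://leastconn.example.com/
# 若访问日志使用了记录upstream_addr的格式(如后文的lb_monitor格式),可统计各后端分担的请求数
grep -o 'upstream_addr="[^"]*"' /var/log/nginx/access.log | sort | uniq -c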
最少连接策略特点
工作原理:
- 实时监控各服务器的连接数
- 将新请求分配到连接数最少的服务器
- 动态调整负载分配
适用场景:
- 请求处理时间差异较大
- 需要动态负载均衡
- 长连接应用
优缺点:
- ✅ 动态负载均衡
- ✅ 响应时间更优
- ✅ 资源利用率高
- ❌ 需要实时监控连接数
- ❌ 配置相对复杂
3.5 一致性哈希策略(consistent hash)
一致性哈希配置
# =============================================
# 一致性哈希策略配置
# =============================================
# 定义后端服务器组(一致性哈希)
upstream backend_consistent_hash {
# 一致性哈希方式
hash $request_uri consistent;
    server 192.168.1.10:8080 max_fails=3 fail_timeout=30s;
    server 192.168.1.11:8080 max_fails=3 fail_timeout=30s;
    server 192.168.1.12:8080 max_fails=3 fail_timeout=30s;
# 连接保持
keepalive 32;
keepalive_timeout 30s;
keepalive_requests 1000;
}
server {
listen 80;
server_name consistent.example.com;
location / {
proxy_pass http://backend_consistent_hash;
proxy_set_header host $host;
proxy_set_header x-real-ip $remote_addr;
proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
proxy_set_header x-forwarded-proto $scheme;
proxy_http_version 1.1;
proxy_set_header connection "";
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
# 添加哈希信息
add_header x-lb-algorithm "consistent_hash";
add_header x-upstream-addr $upstream_addr;
add_header x-hash-key $request_uri;
}
}
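一致性哈希以$request_uri作为哈希键,同一uri应稳定落到同一台后端,可以用下面的脚本验证(示意脚本):
# 相同uri重复请求应命中同一后端,不同uri可能落到不同后端
for uri in /a /b /c /a /b /c; do
    echo -n "$uri -> "
    curl -s -o /dev/null -D - -H "Host: consistent.example.com" "http://127.0.0.1$uri" \
        | grep -i '^x-upstream-addr'
done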
一致性哈希策略特点
工作原理:
- 基于请求特征(如uri)计算哈希值
- 相同特征的请求分配到同一服务器
- 支持动态添加/删除服务器
适用场景:
- 缓存系统
- 分布式存储
- 需要数据一致性
优缺点:
- ✅ 支持动态扩容
- ✅ 数据一致性好
- ✅ 负载分配相对均匀
- ❌ 配置复杂
- ❌ 需要选择合适的哈希键
3.6 策略对比总结
| 策略 | 适用场景 | 优点 | 缺点 |
|---|---|---|---|
| 轮询 | 服务器性能相近 | 配置简单,分配均匀 | 不考虑性能差异 |
| 加权轮询 | 服务器性能差异大 | 考虑性能差异 | 需要手动调整权重 |
| ip哈希 | 需要会话保持 | 会话保持 | 负载不均匀 |
| 最少连接 | 请求处理时间差异大 | 动态负载均衡 | 配置复杂 |
| 一致性哈希 | 缓存系统 | 支持动态扩容 | 配置复杂 |
四、高级负载均衡配置
4.1 健康检查配置
被动健康检查
# =============================================
# 被动健康检查配置
# =============================================
upstream backend_health_check {
# 后端服务器配置
server 192.168.1.10:8080 weight=5 max_fails=3 fail_timeout=30s;
server 192.168.1.11:8080 weight=3 max_fails=3 fail_timeout=30s;
server 192.168.1.12:8080 weight=2 max_fails=3 fail_timeout=30s backup;
# 负载均衡方法
least_conn;
# 连接保持
keepalive 32;
keepalive_timeout 30s;
keepalive_requests 1000;
}
server {
listen 80;
server_name health.example.com;
location / {
proxy_pass http://backend_health_check;
proxy_set_header host $host;
proxy_set_header x-real-ip $remote_addr;
proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
proxy_set_header x-forwarded-proto $scheme;
proxy_http_version 1.1;
proxy_set_header connection "";
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
# 添加健康检查信息
add_header x-upstream-status $upstream_status;
add_header x-upstream-response-time $upstream_response_time;
add_header x-upstream-addr $upstream_addr;
}
# =============================================
# 健康检查端点
# =============================================
location /health {
# 限制访问
allow 127.0.0.1;
allow 192.168.1.0/24;
deny all;
# 返回健康状态
add_header content-type "application/json";
return 200 '{
"status": "healthy",
"upstream": "backend_health_check",
"servers": [
{"addr": "192.168.1.10:8080", "status": "up"},
{"addr": "192.168.1.11:8080", "status": "up"},
{"addr": "192.168.1.12:8080", "status": "backup"}
]
}';
}
}
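被动健康检查的效果可以通过人为制造一次后端故障来观察:停掉一台后端后,nginx在累计max_fails次失败后的fail_timeout时间内不再向其转发请求(示意步骤,假设你有后端主机的操作权限):
# 1. 在192.168.1.10上停止后端服务(命令视实际部署方式而定,例如systemctl stop <app>)
# 2. 在nginx所在机器上连续发起请求,观察流量是否只落在存活节点与backup节点上
for i in $(seq 1 10); do
    curl -s -o /dev/null -D - -H "Host: health.example.com" http://127.0.0.1/ \
        | grep -i '^x-upstream-addr'
done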
主动健康检查(需要nginx plus或第三方模块)
# =============================================
# 主动健康检查配置(需要nginx_plus)
# =============================================
upstream backend_active_health {
zone backend_active_health 64k;
    # slow_start参数仅nginx plus支持
    server 192.168.1.10:8080 slow_start=30s;
    server 192.168.1.11:8080 slow_start=30s;
    server 192.168.1.12:8080 backup;
    # 注意:health_check是location级指令,应写在下方server的location块中,不能写在upstream中
# 负载均衡
least_conn;
# 连接保持
keepalive 32;
keepalive_timeout 30s;
keepalive_requests 1000;
}
server {
listen 80;
server_name active-health.example.com;
location / {
proxy_pass http://backend_active_health;
proxy_set_header host $host;
proxy_set_header x-real-ip $remote_addr;
proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
proxy_set_header x-forwarded-proto $scheme;
proxy_http_version 1.1;
proxy_set_header connection "";
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
        proxy_read_timeout 60s;
        # 主动健康检查(nginx plus指令,依赖upstream中定义的zone)
        health_check interval=10s fails=3 passes=2 uri=/health;
        # 添加健康状态信息
add_header x-upstream-status $upstream_status;
add_header x-upstream-response-time $upstream_response_time;
add_header x-upstream-addr $upstream_addr;
}
# =============================================
# 健康状态监控
# =============================================
location /upstream_status {
# 限制访问
allow 127.0.0.1;
allow 192.168.1.0/24;
deny all;
        # 开源版nginx没有upstream_status指令
        # nginx plus可通过api模块查看上游状态,开源版可借助第三方模块(如nginx-module-vts)
        # 这里退化为展示nginx整体连接状态
        stub_status;
}
}
4.2 会话保持配置
cookie会话保持
# =============================================
# cookie会话保持配置
# =============================================
upstream backend_sticky {
    # cookie会话保持(sticky指令需nginx plus或第三方nginx-sticky-module)
    sticky cookie srv_id expires=1h domain=.example.com path=/ httponly secure;
    server 192.168.1.10:8080 max_fails=3 fail_timeout=30s;
    server 192.168.1.11:8080 max_fails=3 fail_timeout=30s;
    server 192.168.1.12:8080 max_fails=3 fail_timeout=30s;
# 连接保持
keepalive 32;
keepalive_timeout 30s;
keepalive_requests 1000;
}
server {
listen 80;
server_name sticky.example.com;
location / {
proxy_pass http://backend_sticky;
proxy_set_header host $host;
proxy_set_header x-real-ip $remote_addr;
proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
proxy_set_header x-forwarded-proto $scheme;
proxy_http_version 1.1;
proxy_set_header connection "";
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
# 添加会话保持信息
add_header x-lb-session-sticky "cookie";
add_header x-upstream-addr $upstream_addr;
}
# =============================================
# 会话管理
# =============================================
location /session {
# 显示会话信息
add_header content-type "application/json";
return 200 '{
"session_id": "$cookie_srv_id",
"upstream_addr": "$upstream_addr",
"session_sticky": "enabled"
}';
}
}
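cookie会话保持可以用curl的cookie保存/回放功能验证:第一次请求拿到srv_id,之后携带该cookie的请求应固定命中同一后端(示意命令,前提是sticky模块已可用):
# 第一次请求:保存cookie,并查看下发的srv_id与命中的后端
curl -s -c /tmp/sticky.cookie -o /dev/null -D - -H "Host: sticky.example.com" http://127.0.0.1/ \
    | grep -iE '^(set-cookie|x-upstream-addr)'
# 后续请求:携带cookie,x-upstream-addr应与上一次一致
curl -s -b /tmp/sticky.cookie -o /dev/null -D - -H "Host: sticky.example.com" http://127.0.0.1/ \
    | grep -i '^x-upstream-addr'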
路由会话保持
# =============================================
# 路由会话保持配置
# =============================================
upstream backend_route {
    # 路由会话保持(sticky route为nginx plus指令)
    # $route_cookie、$route_uri需预先用map从cookie或uri中提取路由标识,例如基于jsessionid后缀:
    # map $cookie_jsessionid $route_cookie { ~.+\.(?P<route>\w+)$ $route; }
    # map $request_uri $route_uri { ~jsessionid=.+\.(?P<route>\w+)$ $route; }
    sticky route $route_cookie $route_uri;
    server 192.168.1.10:8080 route=a max_fails=3 fail_timeout=30s;
    server 192.168.1.11:8080 route=b max_fails=3 fail_timeout=30s;
    server 192.168.1.12:8080 route=c max_fails=3 fail_timeout=30s;
# 连接保持
keepalive 32;
keepalive_timeout 30s;
keepalive_requests 1000;
}
server {
listen 80;
server_name route.example.com;
location / {
proxy_pass http://backend_route;
proxy_set_header host $host;
proxy_set_header x-real-ip $remote_addr;
proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
proxy_set_header x-forwarded-proto $scheme;
proxy_http_version 1.1;
proxy_set_header connection "";
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
# 添加路由信息
add_header x-lb-session-sticky "route";
add_header x-upstream-addr $upstream_addr;
add_header x-route $route_cookie;
}
}
4.3 动态负载均衡配置
基于dns的动态负载均衡
# =============================================
# 基于dns的动态负载均衡配置
# =============================================
upstream backend_dynamic {
    # 注意:resolver指令不能写在upstream块中,应配置在http/server块中(见下方server配置)
    # server的resolve参数仅nginx plus支持,且需要配合zone共享内存使用
    zone backend_dynamic 64k;
    # 动态服务器配置(dns解析结果变化时自动更新后端列表)
    server web1.example.com:8080 resolve max_fails=3 fail_timeout=30s;
    server web2.example.com:8080 resolve max_fails=3 fail_timeout=30s;
    server web3.example.com:8080 resolve max_fails=3 fail_timeout=30s;
# 负载均衡方法
least_conn;
# 连接保持
keepalive 32;
keepalive_timeout 30s;
keepalive_requests 1000;
}
server {
listen 80;
    server_name dynamic.example.com;
    # dns解析器(resolver须在http/server/location上下文中配置)
    resolver 8.8.8.8 8.8.4.4 valid=300s;
    resolver_timeout 5s;
location / {
proxy_pass http://backend_dynamic;
proxy_set_header host $host;
proxy_set_header x-real-ip $remote_addr;
proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
proxy_set_header x-forwarded-proto $scheme;
proxy_http_version 1.1;
proxy_set_header connection "";
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
# 添加动态信息
add_header x-lb-dynamic "dns_based";
add_header x-upstream-addr $upstream_addr;
}
}
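基于dns的动态负载均衡依赖解析结果的变化,可以先确认域名解析,再观察解析变更后流量是否自动切换(示意命令,假设web*.example.com由你可控的dns管理):
# 查看当前解析结果
dig +short web1.example.com
# 修改dns记录并等待valid时间(上文为300s)后,再次发请求观察后端地址变化
for i in $(seq 1 5); do
    curl -s -o /dev/null -D - -H "Host: dynamic.example.com" http://127.0.0.1/ \
        | grep -i '^x-upstream-addr'
done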
基于服务发现的动态负载均衡
# =============================================
# 基于服务发现的动态负载均衡配置
# =============================================
upstream backend_service_discovery {
    # 服务发现配置(service参数基于dns srv记录,与resolve一样仅nginx plus支持)
    zone backend_service_discovery 64k;
    # 动态服务器配置(使用srv记录时不指定端口,端口由srv记录提供)
    server backend-service-1.example.com service=backend resolve;
    server backend-service-2.example.com service=backend resolve;
    server backend-service-3.example.com service=backend resolve;
    # 主动健康检查(nginx plus指令,实际应写在location块中)
    # health_check interval=10s fails=3 passes=2 uri=/health;
# 负载均衡方法
least_conn;
# 连接保持
keepalive 32;
keepalive_timeout 30s;
keepalive_requests 1000;
}
server {
listen 80;
    server_name service-discovery.example.com;
    # dns解析器(供上方upstream的resolve使用;生产中应指向内部dns)
    resolver 8.8.8.8 8.8.4.4 valid=30s;
location / {
proxy_pass http://backend_service_discovery;
proxy_set_header host $host;
proxy_set_header x-real-ip $remote_addr;
proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
proxy_set_header x-forwarded-proto $scheme;
proxy_http_version 1.1;
proxy_set_header connection "";
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
# 添加服务发现信息
add_header x-lb-service-discovery "enabled";
add_header x-upstream-addr $upstream_addr;
}
# =============================================
# 服务发现状态
# =============================================
location /service_status {
# 限制访问
allow 127.0.0.1;
allow 192.168.1.0/24;
deny all;
# 显示服务发现状态
add_header content-type "application/json";
return 200 '{
"service_discovery": "enabled",
"upstream": "backend_service_discovery",
"services": [
{"name": "backend-service-1", "status": "up"},
{"name": "backend-service-2", "status": "up"},
{"name": "backend-service-3", "status": "up"}
]
}';
}
}
4.4 灰度发布配置
基于权重的灰度发布
# =============================================
# 基于权重的灰度发布配置
# =============================================
upstream backend_canary {
# 灰度发布配置
# 旧版本:80%流量
# 新版本:20%流量
    server 192.168.1.10:8080 weight=8 max_fails=3 fail_timeout=30s; # 旧版本
    server 192.168.1.20:8080 weight=2 max_fails=3 fail_timeout=30s; # 新版本
# 连接保持
keepalive 32;
keepalive_timeout 30s;
keepalive_requests 1000;
}
server {
listen 80;
server_name canary.example.com;
location / {
proxy_pass http://backend_canary;
proxy_set_header host $host;
proxy_set_header x-real-ip $remote_addr;
proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
proxy_set_header x-forwarded-proto $scheme;
proxy_http_version 1.1;
proxy_set_header connection "";
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
# 添加灰度发布信息
add_header x-canary-deployment "enabled";
add_header x-upstream-addr $upstream_addr;
add_header x-version-weight "8:2";
}
# =============================================
# 版本信息
# =============================================
location /version {
# 显示版本信息
add_header content-type "application/json";
return 200 '{
"canary_deployment": "enabled",
"old_version": {"weight": 8, "addr": "192.168.1.10:8080"},
"new_version": {"weight": 2, "addr": "192.168.1.20:8080"}
}';
}
}
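灰度比例是否符合预期,可以直接统计新旧版本后端的命中次数(示意脚本,100个请求下新版本约命中20次):
# 统计新旧版本的流量分配,预期约为80:20
for i in $(seq 1 100); do
    curl -s -o /dev/null -D - -H "Host: canary.example.com" http://127.0.0.1/ \
        | grep -i '^x-upstream-addr'
done | sort | uniq -c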
基于用户特征的灰度发布
# =============================================
# 基于用户特征的灰度发布配置
# =============================================
# 定义用户特征映射(根据cookie中的user_id划分用户组)
map $cookie_user_id $user_group {
    default "old";
    "~^user[0-9]{1,3}$" "new"; # user0~user999的用户分配到新版本
}
# 根据用户组映射到对应的upstream名称
map $user_group $canary_upstream {
    "new"   "backend_new_version";
    default "backend_old_version";
}
# 定义新旧版本的后端服务器组
upstream backend_old_version {
    server 192.168.1.10:8080; # 旧版本
}
upstream backend_new_version {
    server 192.168.1.20:8080; # 新版本
}
server {
listen 80;
server_name user-canary.example.com;
location / {
        # 根据用户组选择后端(通过变量代理到对应upstream,避免在location中使用if)
        proxy_pass http://$canary_upstream;
proxy_set_header host $host;
proxy_set_header x-real-ip $remote_addr;
proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
proxy_set_header x-forwarded-proto $scheme;
proxy_http_version 1.1;
proxy_set_header connection "";
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
# 添加用户灰度信息
add_header x-canary-user-based "enabled";
add_header x-user-group $user_group;
add_header x-upstream-addr $upstream_addr;
}
# =============================================
# 用户组信息
# =============================================
location /user_group {
# 显示用户组信息
add_header content-type "application/json";
return 200 '{
"user_canary": "enabled",
"user_id": "$cookie_user_id",
"user_group": "$user_group",
"upstream_addr": "$upstream_addr"
}';
}
}
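基于用户特征的灰度可以用携带不同cookie的请求直接验证分组结果(示意命令,user_id的取值规则见上面的map):
# 命中灰度规则的用户:应返回x-user-group: new,并转发到新版本后端
curl -s -o /dev/null -D - -H "Host: user-canary.example.com" --cookie "user_id=user123" http://127.0.0.1/ \
    | grep -iE '^(x-user-group|x-upstream-addr)'
# 未命中规则的用户:应返回x-user-group: old
curl -s -o /dev/null -D - -H "Host: user-canary.example.com" http://127.0.0.1/ \
    | grep -iE '^(x-user-group|x-upstream-addr)'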
五、负载均衡监控与故障处理
5.1 负载均衡监控配置
状态监控配置
# =============================================
# 负载均衡监控配置
# =============================================
http {
# =============================================
# 监控日志格式
# =============================================
# 负载均衡监控日志格式
log_format lb_monitor '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for" '
'upstream_addr="$upstream_addr" '
'upstream_status="$upstream_status" '
'upstream_response_time="$upstream_response_time" '
'upstream_connect_time="$upstream_connect_time" '
'upstream_header_time="$upstream_header_time"';
# =============================================
# 监控服务器配置
# =============================================
server {
listen 80;
server_name monitor.example.com;
# 负载均衡状态页面
location /lb_status {
# 限制访问
allow 127.0.0.1;
allow 192.168.1.0/24;
deny all;
# 显示负载均衡状态
stub_status on;
add_header content-type "text/plain";
}
# 上游服务器状态
location /upstream_status {
# 限制访问
allow 127.0.0.1;
allow 192.168.1.0/24;
deny all;
            # 开源版nginx没有upstream_status指令:nginx plus可用api模块,开源版可用第三方模块(如nginx-module-vts)
            # 这里退化为展示nginx整体连接状态
            stub_status;
}
# 实时监控数据
location /real_time_stats {
# 限制访问
allow 127.0.0.1;
allow 192.168.1.0/24;
deny all;
# 实时统计信息
add_header content-type "application/json";
return 200 '{
"timestamp": "$time_iso8601",
"connections": {
"active": $connections_active,
"reading": $connections_reading,
"writing": $connections_writing,
"waiting": $connections_waiting
},
"requests": {
"total": $request_counter,
"current": $connections_active
}
}';
}
# 历史统计数据
location /historical_stats {
# 限制访问
allow 127.0.0.1;
allow 192.168.1.0/24;
deny all;
# 显示历史统计信息
add_header content-type "application/json";
return 200 '{
"historical_data": "available_in_log_files",
"log_file": "/var/log/nginx/lb_monitor.log",
"retention_days": 30
}';
}
}
}
监控脚本配置
# =============================================
# 负载均衡监控脚本
# 创建 /usr/local/nginx/scripts/lb_monitor.sh
# =============================================
#!/bin/bash
# nginx负载均衡监控脚本
# 用法:./lb_monitor.sh
# 配置参数
nginx_status_url="http://localhost/lb_status"
upstream_status_url="http://localhost/upstream_status"
log_file="/var/log/nginx/lb_monitor.log"
alert_email="admin@example.com"
alert_threshold=1000
# 获取nginx状态
get_nginx_status() {
    curl -s -H "Host: monitor.example.com" "$nginx_status_url"
}
# 获取上游服务器状态
get_upstream_status() {
    curl -s -H "Host: monitor.example.com" "$upstream_status_url"
}
# 解析nginx状态
parse_nginx_status() {
local status=$(get_nginx_status)
local active_connections=$(echo "$status" | grep "active connections" | awk '{print $3}')
local accepts=$(echo "$status" | awk 'nr==3 {print $1}')
local handled=$(echo "$status" | awk 'nr==3 {print $2}')
local requests=$(echo "$status" | awk 'nr==3 {print $3}')
local reading=$(echo "$status" | awk 'nr==4 {print $2}')
local writing=$(echo "$status" | awk 'nr==4 {print $4}')
local waiting=$(echo "$status" | awk 'nr==4 {print $6}')
echo "active connections: $active_connections"
echo "accepts: $accepts"
echo "handled: $handled"
echo "requests: $requests"
echo "reading: $reading"
echo "writing: $writing"
echo "waiting: $waiting"
# 检查是否超过阈值
if [ "$active_connections" -gt "$alert_threshold" ]; then
echo "warning: active connections exceed threshold: $active_connections > $alert_threshold"
send_alert "high active connections detected: $active_connections"
fi
}
# 解析上游服务器状态
parse_upstream_status() {
local status=$(get_upstream_status)
echo "upstream server status:"
echo "$status"
# 检查是否有服务器宕机
if echo "$status" | grep -q "down"; then
echo "warning: some upstream servers are down"
send_alert "upstream server down detected"
fi
}
# 发送告警
send_alert() {
local message=$1
echo "[$(date)] alert: $message" >> $log_file
echo "$message" | mail -s "nginx load balancer alert" $alert_email
}
# 记录监控数据
log_monitor_data() {
local timestamp=$(date "+%y-%m-%d %h:%m:%s")
local status=$(get_nginx_status)
    local active_connections=$(echo "$status" | grep -i "active connections" | awk '{print $3}')
local upstream_status=$(get_upstream_status)
echo "$timestamp, $active_connections, $upstream_status" >> $log_file
}
# 主函数
main() {
echo "=== nginx load balancer monitor ==="
echo "timestamp: $(date)"
echo ""
echo "nginx status:"
parse_nginx_status
echo ""
echo "upstream status:"
parse_upstream_status
echo ""
echo "logging monitor data..."
log_monitor_data
echo "monitoring completed."
}
# 执行主函数
main
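监控脚本可以交给crontab定时执行,例如每分钟运行一次(示意命令,脚本路径与上文一致):
# 赋予脚本可执行权限,并追加crontab定时任务(每分钟执行一次)
chmod +x /usr/local/nginx/scripts/lb_monitor.sh
(crontab -l 2>/dev/null; echo "* * * * * /usr/local/nginx/scripts/lb_monitor.sh >> /var/log/nginx/lb_monitor_cron.log 2>&1") | crontab -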
5.2 故障处理配置
故障转移配置
# =============================================
# 故障转移配置
# =============================================
upstream backend_failover {
# 主服务器
server 192.168.1.10:8080 weight=5 max_fails=3 fail_timeout=30s;
server 192.168.1.11:8080 weight=3 max_fails=3 fail_timeout=30s;
# 备份服务器
server 192.168.1.20:8080 backup;
server 192.168.1.21:8080 backup;
# 负载均衡方法
least_conn;
# 连接保持
keepalive 32;
keepalive_timeout 30s;
keepalive_requests 1000;
}
server {
listen 80;
server_name failover.example.com;
location / {
proxy_pass http://backend_failover;
proxy_set_header host $host;
proxy_set_header x-real-ip $remote_addr;
proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
proxy_set_header x-forwarded-proto $scheme;
proxy_http_version 1.1;
proxy_set_header connection "";
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
# 故障转移配置
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
proxy_next_upstream_tries 3;
proxy_next_upstream_timeout 10s;
# 添加故障转移信息
add_header x-failover "enabled";
add_header x-upstream-addr $upstream_addr;
add_header x-upstream-status $upstream_status;
}
# =============================================
# 故障状态页面
# =============================================
location /failover_status {
# 限制访问
allow 127.0.0.1;
allow 192.168.1.0/24;
deny all;
# 显示故障转移状态
add_header content-type "application/json";
return 200 '{
"failover": "enabled",
"primary_servers": [
{"addr": "192.168.1.10:8080", "status": "up"},
{"addr": "192.168.1.11:8080", "status": "up"}
],
"backup_servers": [
{"addr": "192.168.1.20:8080", "status": "standby"},
{"addr": "192.168.1.21:8080", "status": "standby"}
]
}';
}
}
熔断器配置
# =============================================
# 熔断器配置
# =============================================
# 定义熔断器状态变量
# 注意:nginx开源版没有内置熔断器,这里仅为配置示意;
# $upstream_addr在请求转发之后才会赋值,无法用于转发前的判断,
# 生产中通常借助openresty/lua、nginx plus api或外部下发配置来维护熔断状态
map $upstream_addr $circuit_breaker_status {
    default "closed";
}
upstream backend_circuit_breaker {
server 192.168.1.10:8080 max_fails=3 fail_timeout=30s;
server 192.168.1.11:8080 max_fails=3 fail_timeout=30s;
server 192.168.1.12:8080 max_fails=3 fail_timeout=30s;
# 负载均衡方法
least_conn;
# 连接保持
keepalive 32;
keepalive_timeout 30s;
keepalive_requests 1000;
}
server {
listen 80;
server_name circuit-breaker.example.com;
location / {
# 熔断器检查
if ($circuit_breaker_status = "open") {
return 503 "service unavailable - circuit breaker open";
}
proxy_pass http://backend_circuit_breaker;
proxy_set_header host $host;
proxy_set_header x-real-ip $remote_addr;
proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
proxy_set_header x-forwarded-proto $scheme;
proxy_http_version 1.1;
proxy_set_header connection "";
proxy_connect_timeout 5s;
proxy_send_timeout 5s;
proxy_read_timeout 5s;
# 熔断器配置
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
proxy_next_upstream_tries 2;
proxy_next_upstream_timeout 5s;
# 添加熔断器信息
add_header x-circuit-breaker $circuit_breaker_status;
add_header x-upstream-addr $upstream_addr;
add_header x-upstream-status $upstream_status;
}
# =============================================
# 熔断器状态管理
# =============================================
location /circuit_breaker {
# 限制访问
allow 127.0.0.1;
allow 192.168.1.0/24;
deny all;
# 熔断器控制
if ($args ~* "action=open") {
return 200 'circuit breaker opened manually';
}
if ($args ~* "action=close") {
return 200 'circuit breaker closed manually';
}
# 显示熔断器状态
add_header content-type "application/json";
return 200 '{
"circuit_breaker": "enabled",
"status": "$circuit_breaker_status",
"upstream_addr": "$upstream_addr",
"upstream_status": "$upstream_status"
}';
}
}
六、实战案例
6.1 电商网站负载均衡配置
电商网站负载均衡架构
# =============================================
# 电商网站负载均衡配置
# =============================================
# 用户服务负载均衡
upstream user_service {
least_conn;
server 192.168.1.10:8080 weight=3 max_fails=3 fail_timeout=30s;
server 192.168.1.11:8080 weight=3 max_fails=3 fail_timeout=30s;
server 192.168.1.12:8080 weight=2 max_fails=3 fail_timeout=30s backup;
keepalive 32;
keepalive_timeout 30s;
keepalive_requests 1000;
}
# 商品服务负载均衡
upstream product_service {
    ip_hash;
    server 192.168.1.20:8080 max_fails=3 fail_timeout=30s;
    server 192.168.1.21:8080 max_fails=3 fail_timeout=30s;
    # 注意:ip_hash不允许与backup参数同时使用,这里将第三台作为普通节点
    server 192.168.1.22:8080 max_fails=3 fail_timeout=30s;
keepalive 32;
keepalive_timeout 30s;
keepalive_requests 1000;
}
# 订单服务负载均衡
upstream order_service {
least_conn;
server 192.168.1.30:8080 weight=4 max_fails=3 fail_timeout=30s;
server 192.168.1.31:8080 weight=4 max_fails=3 fail_timeout=30s;
server 192.168.1.32:8080 weight=2 max_fails=3 fail_timeout=30s backup;
keepalive 32;
keepalive_timeout 30s;
keepalive_requests 1000;
}
# 支付服务负载均衡
upstream payment_service {
least_conn;
server 192.168.1.40:8080 weight=5 max_fails=2 fail_timeout=15s;
server 192.168.1.41:8080 weight=5 max_fails=2 fail_timeout=15s;
server 192.168.1.42:8080 backup;
keepalive 16;
keepalive_timeout 30s;
keepalive_requests 500;
}
server {
listen 80;
server_name ecommerce.example.com;
access_log /var/log/nginx/ecommerce.example.com.access.log main;
error_log /var/log/nginx/ecommerce.example.com.error.log warn;
# =============================================
# 静态资源
# =============================================
location ~* \.(jpg|jpeg|png|gif|ico|css|js|woff|woff2|ttf|eot|svg)$ {
root /usr/local/nginx/html/ecommerce;
expires 7d;
add_header cache-control "public, no-transform";
access_log off;
}
# =============================================
# 用户服务
# =============================================
location /api/user/ {
proxy_pass http://user_service;
proxy_set_header host $host;
proxy_set_header x-real-ip $remote_addr;
proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
proxy_set_header x-forwarded-proto $scheme;
proxy_http_version 1.1;
proxy_set_header connection "";
proxy_connect_timeout 30s;
proxy_send_timeout 30s;
proxy_read_timeout 30s;
# 用户服务会话保持
proxy_cookie_path / /;
proxy_cookie_domain off;
# 添加服务标识
add_header x-service "user_service";
add_header x-upstream-addr $upstream_addr;
}
# =============================================
# 商品服务
# =============================================
location /api/product/ {
proxy_pass http://product_service;
proxy_set_header host $host;
proxy_set_header x-real-ip $remote_addr;
proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
proxy_set_header x-forwarded-proto $scheme;
proxy_http_version 1.1;
proxy_set_header connection "";
proxy_connect_timeout 30s;
proxy_send_timeout 30s;
proxy_read_timeout 30s;
            # 商品服务缓存(需先在http块中定义缓存区,例如:
            # proxy_cache_path /var/cache/nginx/product levels=1:2 keys_zone=product_cache:10m max_size=1g;)
            proxy_cache product_cache;
proxy_cache_valid 200 302 10m;
proxy_cache_valid 404 1m;
proxy_cache_key $scheme$request_method$host$request_uri;
# 添加服务标识
add_header x-service "product_service";
add_header x-upstream-addr $upstream_addr;
add_header x-cache-status $upstream_cache_status;
}
# =============================================
# 订单服务
# =============================================
location /api/order/ {
proxy_pass http://order_service;
proxy_set_header host $host;
proxy_set_header x-real-ip $remote_addr;
proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
proxy_set_header x-forwarded-proto $scheme;
proxy_http_version 1.1;
proxy_set_header connection "";
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
# 订单服务重试机制
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
proxy_next_upstream_tries 3;
# 添加服务标识
add_header x-service "order_service";
add_header x-upstream-addr $upstream_addr;
}
# =============================================
# 支付服务
# =============================================
location /api/payment/ {
proxy_pass http://payment_service;
proxy_set_header host $host;
proxy_set_header x-real-ip $remote_addr;
proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
proxy_set_header x-forwarded-proto $scheme;
proxy_http_version 1.1;
proxy_set_header connection "";
proxy_connect_timeout 30s;
proxy_send_timeout 30s;
proxy_read_timeout 30s;
# 支付服务严格超时
proxy_next_upstream error timeout;
proxy_next_upstream_tries 2;
# 添加服务标识
add_header x-service "payment_service";
add_header x-upstream-addr $upstream_addr;
}
# =============================================
# 健康检查
# =============================================
location /health {
access_log off;
return 200 "healthy\n";
add_header content-type text/plain;
}
# =============================================
# 服务状态
# =============================================
location /service_status {
# 限制访问
allow 127.0.0.1;
allow 192.168.1.0/24;
deny all;
# 显示服务状态
add_header content-type "application/json";
return 200 '{
"services": {
"user_service": {"status": "active", "algorithm": "least_conn"},
"product_service": {"status": "active", "algorithm": "ip_hash"},
"order_service": {"status": "active", "algorithm": "least_conn"},
"payment_service": {"status": "active", "algorithm": "least_conn"}
}
}';
}
}
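整套电商配置上线后,可以用一个简单的冒烟脚本依次检查各服务入口,确认路由与负载均衡均按预期工作(示意脚本,各接口路径以实际服务为准):
# 对各服务入口逐一发起请求,查看响应状态、命中的服务与后端地址
for path in /api/user/ /api/product/ /api/order/ /api/payment/ /health; do
    echo "== $path =="
    curl -s -o /dev/null -D - -H "Host: ecommerce.example.com" "http://127.0.0.1$path" \
        | grep -iE '^(http/|x-service|x-upstream-addr|x-cache-status)'
done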
6.2 微服务架构负载均衡配置
微服务负载均衡架构
# =============================================
# 微服务架构负载均衡配置
# =============================================
# api网关负载均衡
upstream api_gateway {
least_conn;
server 192.168.1.10:8080 weight=3 max_fails=3 fail_timeout=30s;
server 192.168.1.11:8080 weight=3 max_fails=3 fail_timeout=30s;
server 192.168.1.12:8080 weight=2 max_fails=3 fail_timeout=30s backup;
keepalive 32;
keepalive_timeout 30s;
keepalive_requests 1000;
}
# 认证服务负载均衡
upstream auth_service {
least_conn;
server 192.168.1.20:8080 max_fails=3 fail_timeout=30s;
server 192.168.1.21:8080 max_fails=3 fail_timeout=30s;
keepalive 16;
keepalive_timeout 30s;
keepalive_requests 500;
}
# 用户服务负载均衡
upstream user_service {
    ip_hash;
    server 192.168.1.30:8080 max_fails=3 fail_timeout=30s;
    server 192.168.1.31:8080 max_fails=3 fail_timeout=30s;
    # ip_hash不允许backup参数,第三台作为普通节点
    server 192.168.1.32:8080 max_fails=3 fail_timeout=30s;
keepalive 32;
keepalive_timeout 30s;
keepalive_requests 1000;
}
# 业务服务负载均衡
upstream business_service {
least_conn;
server 192.168.1.40:8080 weight=2 max_fails=3 fail_timeout=30s;
server 192.168.1.41:8080 weight=2 max_fails=3 fail_timeout=30s;
server 192.168.1.42:8080 weight=1 max_fails=3 fail_timeout=30s;
server 192.168.1.43:8080 weight=1 max_fails=3 fail_timeout=30s backup;
keepalive 32;
keepalive_timeout 30s;
keepalive_requests 1000;
}
# 消息服务负载均衡
upstream message_service {
least_conn;
server 192.168.1.50:8080 max_fails=3 fail_timeout=30s;
server 192.168.1.51:8080 max_fails=3 fail_timeout=30s;
keepalive 16;
keepalive_timeout 30s;
keepalive_requests 500;
}
# 文件服务负载均衡
upstream file_service {
least_conn;
server 192.168.1.60:8080 weight=3 max_fails=3 fail_timeout=30s;
server 192.168.1.61:8080 weight=3 max_fails=3 fail_timeout=30s;
server 192.168.1.62:8080 weight=2 max_fails=3 fail_timeout=30s backup;
keepalive 32;
keepalive_timeout 30s;
keepalive_requests 1000;
}
server {
listen 80;
server_name microservice.example.com;
access_log /var/log/nginx/microservice.example.com.access.log main;
error_log /var/log/nginx/microservice.example.com.error.log warn;
# =============================================
# api网关
# =============================================
location / {
proxy_pass http://api_gateway;
proxy_set_header host $host;
proxy_set_header x-real-ip $remote_addr;
proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
proxy_set_header x-forwarded-proto $scheme;
proxy_http_version 1.1;
proxy_set_header connection "";
proxy_connect_timeout 30s;
proxy_send_timeout 30s;
proxy_read_timeout 30s;
# api网关重试机制
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
proxy_next_upstream_tries 3;
# 添加网关标识
add_header x-gateway "nginx";
add_header x-upstream-addr $upstream_addr;
}
# =============================================
# 内部服务路由
# =============================================
# 认证服务
location /internal/auth/ {
proxy_pass http://auth_service;
proxy_set_header host $host;
proxy_set_header x-real-ip $remote_addr;
proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
proxy_set_header x-forwarded-proto $scheme;
proxy_http_version 1.1;
proxy_set_header connection "";
proxy_connect_timeout 30s;
proxy_send_timeout 30s;
proxy_read_timeout 30s;
            # 认证服务缓存(需先在http块中定义keys_zone=auth_cache的proxy_cache_path)
            proxy_cache auth_cache;
proxy_cache_valid 200 302 5m;
proxy_cache_valid 404 1m;
proxy_cache_key $scheme$request_method$host$request_uri;
add_header x-service "auth_service";
add_header x-upstream-addr $upstream_addr;
add_header x-cache-status $upstream_cache_status;
}
# 用户服务
location /internal/user/ {
proxy_pass http://user_service;
proxy_set_header host $host;
proxy_set_header x-real-ip $remote_addr;
proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
proxy_set_header x-forwarded-proto $scheme;
proxy_http_version 1.1;
proxy_set_header connection "";
proxy_connect_timeout 30s;
proxy_send_timeout 30s;
proxy_read_timeout 30s;
add_header x-service "user_service";
add_header x-upstream-addr $upstream_addr;
}
# 业务服务
location /internal/business/ {
proxy_pass http://business_service;
proxy_set_header host $host;
proxy_set_header x-real-ip $remote_addr;
proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
proxy_set_header x-forwarded-proto $scheme;
proxy_http_version 1.1;
proxy_set_header connection "";
proxy_connect_timeout 30s;
proxy_send_timeout 30s;
proxy_read_timeout 30s;
# 业务服务重试机制
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
proxy_next_upstream_tries 3;
add_header x-service "business_service";
add_header x-upstream-addr $upstream_addr;
}
# 消息服务
location /internal/message/ {
proxy_pass http://message_service;
proxy_set_header host $host;
proxy_set_header x-real-ip $remote_addr;
proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
proxy_set_header x-forwarded-proto $scheme;
proxy_http_version 1.1;
proxy_set_header connection "";
proxy_connect_timeout 30s;
proxy_send_timeout 30s;
proxy_read_timeout 30s;
add_header x-service "message_service";
add_header x-upstream-addr $upstream_addr;
}
# 文件服务
location /internal/file/ {
proxy_pass http://file_service;
proxy_set_header host $host;
proxy_set_header x-real-ip $remote_addr;
proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
proxy_set_header x-forwarded-proto $scheme;
proxy_http_version 1.1;
proxy_set_header connection "";
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
# 文件服务大文件支持
client_max_body_size 100m;
proxy_max_temp_file_size 1024m;
add_header x-service "file_service";
add_header x-upstream-addr $upstream_addr;
}
# =============================================
# 服务发现和健康检查
# =============================================
location /service_discovery {
# 限制访问
allow 127.0.0.1;
allow 192.168.1.0/24;
deny all;
# 显示服务发现状态
add_header content-type "application/json";
return 200 '{
"microservices": {
"api_gateway": {"status": "active", "servers": 3},
"auth_service": {"status": "active", "servers": 2},
"user_service": {"status": "active", "servers": 3},
"business_service": {"status": "active", "servers": 4},
"message_service": {"status": "active", "servers": 2},
"file_service": {"status": "active", "servers": 3}
}
}';
}
# =============================================
# 健康检查
# =============================================
location /health {
access_log off;
return 200 "healthy\n";
add_header content-type text/plain;
}
}
性能优化建议:
- 合理配置连接保持参数
- 启用缓存提高响应速度
- 根据服务器性能配置权重
- 实现动态扩容和缩容
- 定期监控和优化负载均衡效果
高可用性建议:
- 配置备份服务器
- 实现故障自动转移
- 建立熔断机制防止雪崩
- 实现灰度发布降低风险
- 定期演练故障恢复流程
nginx负载均衡是构建高可用、高性能系统的关键技术。通过本文的学习,你应该能够根据实际业务需求,设计并实现合适的负载均衡方案,为系统的稳定运行提供有力保障。