
Deploying a Highly Available Kubernetes Cluster with kubeadm

A six-node highly available Kubernetes (K8s) cluster is a system of six servers designed to keep the cluster stable and reliable through redundancy and failover. It typically consists of several master nodes, which run the Kubernetes control-plane components such as the API server, scheduler, and controller manager, and several worker nodes, which run the containerized applications. In a highly available setup the control plane is deployed as multiple instances, with an election mechanism ensuring that one instance is always active while the others stand by; the worker nodes execute the container workloads and rely on load balancing and failover to keep the applications available.

1.1 Kubernetes high-availability cluster deployment

1.1.1 Cluster architecture

1. High-availability topologies

An HA cluster can be set up in two ways:

  • with stacked control-plane nodes, where the etcd members are co-located with the control-plane nodes;

  • with external etcd nodes, where etcd runs on nodes separate from the control plane.

  • Weigh the pros and cons of each topology carefully before setting up an HA cluster.

2. Stacked etcd topology

Key characteristics:

  • The etcd distributed data store runs stacked on the control-plane nodes managed by kubeadm, as a component of the control plane.

  • Each control-plane node runs an instance of kube-apiserver, kube-scheduler, and kube-controller-manager.

  • kube-apiserver is exposed to the worker nodes through a load balancer.

  • Each control-plane node creates a local etcd member, and that member communicates only with the kube-apiserver on the same node. The same applies to the local kube-controller-manager and kube-scheduler instances.

In short: each master node runs its own apiserver and etcd, and the etcd member talks only to that node's apiserver.

This topology couples the control plane and the etcd members onto the same nodes. Compared with an external etcd cluster it is simpler to set up and easier to manage for replication.

However, a stacked cluster carries the risk of coupled failures: if a node goes down, both its etcd member and its control-plane instance are lost, and redundancy suffers. The risk can be reduced by adding more control-plane nodes; an HA cluster should run at least three stacked control-plane nodes, so that etcd keeps quorum (2 of 3) and split-brain is avoided.

This is the default topology in kubeadm: when kubeadm init and kubeadm join --control-plane are used, a local etcd member is created automatically on the control-plane node. The quick check below shows how this looks on a running cluster.
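Once the cluster built later in this guide is up, the stacked layout can be confirmed by listing the etcd static pods on the control-plane nodes (a quick check using the node names from this guide):

[root@k8s-master-01 ~]# kubectl -n kube-system get pods -o wide | grep '^etcd-'
# expect one etcd-<master-name> pod per control-plane node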

3. External etcd topology

Key characteristics:

  • An HA cluster with external etcd is a topology in which the etcd distributed data store runs on separate nodes, independent of the control-plane nodes.

  • As in the stacked etcd topology, each control-plane node in the external etcd topology runs an instance of kube-apiserver, kube-scheduler, and kube-controller-manager.

  • kube-apiserver is again exposed to the worker nodes through a load balancer, but the etcd members run on separate hosts, and each etcd host communicates with the kube-apiserver of every control-plane node.

In short: the etcd cluster runs on its own hosts, and every etcd member communicates with the apiserver nodes.

  • This topology decouples the control plane from the etcd members. It therefore provides an HA setup in which losing a control-plane instance or an etcd member has less impact and does not affect cluster redundancy the way the stacked topology does.

  • However, it requires twice as many hosts as the stacked HA topology: an HA cluster with this topology needs at least three hosts for control-plane nodes and another three for etcd nodes, and the external etcd cluster has to be set up separately (a configuration sketch follows below).
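For reference, with the external topology kubeadm is pointed at the existing etcd cluster through the etcd.external stanza of its ClusterConfiguration. A minimal sketch follows; the endpoints and client-certificate paths are placeholders, not values used elsewhere in this guide, which sticks to the stacked topology:

apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
etcd:
  external:
    endpoints:
      - https://10.0.0.11:2379
      - https://10.0.0.12:2379
      - https://10.0.0.13:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key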

1.1.2 Base environment setup

  • Kubernetes version: 1.28.2

Host            IP address        OS                              Spec
k8s-master-01   192.168.110.21    CentOS Linux release 7.9.2009   4 CPUs, 8 GB RAM, 100 GB disk
k8s-master-02   192.168.110.22    CentOS Linux release 7.9.2009   4 CPUs, 8 GB RAM, 100 GB disk
k8s-master-03   192.168.110.23    CentOS Linux release 7.9.2009   4 CPUs, 8 GB RAM, 100 GB disk
k8s-node-01     192.168.110.24    CentOS Linux release 7.9.2009   4 CPUs, 8 GB RAM, 100 GB disk
k8s-node-02     192.168.110.25    CentOS Linux release 7.9.2009   4 CPUs, 8 GB RAM, 100 GB disk
k8s-node-03     192.168.110.26    CentOS Linux release 7.9.2009   4 CPUs, 8 GB RAM, 100 GB disk

  • Disable the firewall and SELinux

[root@k8s-all ~]# systemctl disable --now firewalld.service
[root@k8s-all ~]# sed -ri 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
[root@k8s-all ~]# setenforce 0
  • Configure hosts resolution on all nodes

[root@k8s-all ~]# cat >> /etc/hosts << eof
> 192.168.110.21 k8s-master-01
> 192.168.110.22 k8s-master-02
> 192.168.110.23 k8s-master-03
> 192.168.110.24 k8s-node-01
> 192.168.110.25 k8s-node-02
> 192.168.110.26 k8s-node-03
> eof
  • Generate a key pair on k8s-master-01 so the other nodes can be reached without a password

[root@k8s-master-01 ~]# ssh-keygen -f ~/.ssh/id_rsa -N '' -q
[root@k8s-master-01 ~]# ssh-copy-id k8s-master-02
[root@k8s-master-01 ~]# ssh-copy-id k8s-master-03
[root@k8s-master-01 ~]# ssh-copy-id k8s-node-01
[root@k8s-master-01 ~]# ssh-copy-id k8s-node-02
[root@k8s-master-01 ~]# ssh-copy-id k8s-node-03
  • Configure NTP time synchronization

[root@k8s-all ~]# sed -i '3,6 s/^/# /' /etc/chrony.conf
[root@k8s-all ~]# sed -i '6 a server ntp.aliyun.com iburst' /etc/chrony.conf 
[root@k8s-all ~]# systemctl restart chronyd.service 
[root@k8s-all ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* 203.107.6.88                  2   6    17    13   -230us[-2619us] +/-   25ms
  • Disable the swap partition

[root@k8s-master-01 ~]# swapoff -a   # turn swap off for the current session
[root@k8s-all ~]# sed -i 's/.*swap.*/# &/' /etc/fstab  # disable it permanently
  • Upgrade the operating system kernel

[root@k8s-all ~]# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
[root@k8s-all ~]# yum install https://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm -y 
[root@k8s-all ~]# yum --enablerepo="elrepo-kernel" install kernel-ml.x86_64 -y
[root@k8s-all ~]# uname -r
3.10.0-1160.71.1.el7.x86_64
[root@k8s-all ~]# grub2-set-default 0
[root@k8s-all ~]# grub2-mkconfig -o /boot/grub2/grub.cfg
[root@k8s-all ~]# reboot
[root@k8s-all ~]# uname -r
6.8.9-1.el7.elrepo.x86_64
  • Enable IP forwarding and bridge filtering

[root@k8s-all ~]# echo net.ipv4.ip_forward = 1 >> /etc/sysctl.conf 
[root@k8s-all ~]# sysctl -p
net.ipv4.ip_forward = 1
[root@k8s-all ~]# cat > /etc/sysctl.d/k8s.conf << eof
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> vm.swappiness = 0
> eof
[root@k8s-all ~]# modprobe br-netfilter
[root@k8s-all ~]# sysctl -p /etc/sysctl.d/k8s.conf
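The settings can be double-checked after loading the module (optional):

[root@k8s-all ~]# sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1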
  • Enable IPVS

[root@k8s-all ~]# yum install ipset ipvsadm -y
[root@k8s-all ~]# vim /etc/sysconfig/modules/ipvs.modules
#!/bin/bash

ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_sed ip_vs_ftp nf_conntrack"

for kernel_module in $ipvs_modules;
do
        /sbin/modinfo -f filename $kernel_module >/dev/null 2>&1
        if [ $? -eq 0 ]; then
                /sbin/modprobe $kernel_module
        fi
done

[root@k8s-all ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules
[root@k8s-all ~]# bash /etc/sysconfig/modules/ipvs.modules
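To confirm that the IPVS modules were actually loaded:

[root@k8s-all ~]# lsmod | grep -e ip_vs -e nf_conntrack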
  • Configure the Aliyun mirror for the Kubernetes packages

[root@k8s-all ~]# cat >> /etc/yum.repos.d/kubernetes.repo <<eof
[kubernetes]
name=kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
eof
  • Install the packages

[root@k8s-all ~]# yum install kubeadm kubelet kubectl -y  # version 1.28.2 is installed here
[root@k8s-all ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"28", GitVersion:"v1.28.2", GitCommit:"89a4ea3e1e4ddd7f7572286090359983e0387b2f", GitTreeState:"clean", BuildDate:"2023-09-13T09:34:32Z", GoVersion:"go1.20.8", Compiler:"gc", Platform:"linux/amd64"}

# To keep the cgroup driver used by the kubelet consistent with the one used by the container runtime, modify the following file

[root@k8s-all ~]# cat <<eof > /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
eof

[root@k8s-all ~]# systemctl enable kubelet.service 
  • Enable kubectl command completion

[root@k8s-all ~]# yum install -y bash-completion
[root@k8s-all ~]# source /usr/share/bash-completion/bash_completion
[root@k8s-all ~]# source <(kubectl completion bash)
[root@k8s-all ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc

1.1.3 Installing and running the container runtime

containerd installation and deployment

  • Install basic tools

[root@k8s-all ~]# yum install yum-utils device-mapper-persistent-data lvm2 -y
  • Download the docker-ce repository file

[root@k8s-all ~]# wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.huaweicloud.com/docker-ce/linux/centos/docker-ce.repo
  • Point the repository at the mirror

[root@k8s-all ~]# sed -i 's+download.docker.com+mirrors.huaweicloud.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo 
[root@k8s-all ~]# sed -i 's/$releasever/7server/g' /etc/yum.repos.d/docker-ce.repo
  • Install containerd

[root@k8s-all ~]# yum install containerd -y
  • Generate the default configuration

[root@k8s-all ~]# containerd config default | tee /etc/containerd/config.toml
[root@k8s-all ~]# sed -i "s#SystemdCgroup = false#SystemdCgroup = true#g" /etc/containerd/config.toml
[root@k8s-all ~]# sed -i "s#registry.k8s.io#registry.aliyuncs.com/google_containers#g" /etc/containerd/config.toml

# configure crictl
[root@k8s-all ~]# cat <<eof | tee /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
eof
[root@k8s-all ~]# systemctl daemon-reload
[root@k8s-all ~]# systemctl enable containerd --now
  • Test

[root@k8s-node-all ~]# crictl pull nginx:alpine
Image is up to date for sha256:f4215f6ee683f29c0a4611b02d1adc3b7d986a96ab894eb5f7b9437c862c9499
[root@k8s-node-all ~]# crictl images
IMAGE                     TAG                 IMAGE ID            SIZE
docker.io/library/nginx   alpine              f4215f6ee683f       20.5MB
[root@k8s-node-all ~]# crictl rmi nginx:alpine
Deleted: docker.io/library/nginx:alpine

1.1.4 Deploying high availability on the master nodes

  • Install nginx and keepalived

[root@k8s-master-all ~]# wget -c https://nginx.org/packages/rhel/7/x86_64/rpms/nginx-1.24.0-1.el7.ngx.x86_64.rpm
[root@k8s-master-all ~]# yum install nginx-1.24.0-1.el7.ngx.x86_64.rpm keepalived.x86_64 -y
  • Modify the nginx configuration file

On all three master nodes, edit the /etc/nginx/nginx.conf configuration file and add a stream block after the events block:

[root@k8s-master-01 ~]# vim /etc/nginx/nginx.conf 
events {
    worker_connections  1024;
}

stream {
 log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
 access_log /var/log/nginx/k8s-access.log main;
 upstream k8s-apiserver {
   server 192.168.110.21:6443;
   server 192.168.110.22:6443;
   server 192.168.110.23:6443;
 }
 server {
   listen 16443;
   proxy_pass k8s-apiserver;
 }
}
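Before starting nginx, the edited file can be syntax-checked; the expected output looks like this:

[root@k8s-master-01 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful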
  • Modify the keepalived configuration file

Overwrite the configuration file /etc/keepalived/keepalived.conf on the k8s-master-01 node:

[root@k8s-master-01 ~]# cat > /etc/keepalived/keepalived.conf<<eof
! configuration file for keepalived
global_defs {
 router_id master1
 script_user root
 enable_script_security
}

vrrp_script check_nginx {
  script "/etc/keepalived/check_nginx.sh"
  interval 3
  fall 3
  rise 2
}

vrrp_instance nginx {
 state MASTER
 interface ens33
 virtual_router_id 51
 priority 200
 advert_int 1
 authentication {
   auth_type PASS
   auth_pass xczkxy
 }
 track_script {
   check_nginx
 }
 virtual_ipaddress {
   192.168.110.20/24
 }
}
eof
  • Write the health-check script

[root@k8s-master-01 ~]# cat > /etc/keepalived/check_nginx.sh << 'eof'
#!/bin/sh
# if nginx is down, try to restart it; if it still fails, exit non-zero so keepalived fails over
pid=`ps -C nginx --no-header | wc -l`
if [ $pid -eq 0 ]
then
    systemctl start nginx
    sleep 5
    if [ `ps -C nginx --no-header | wc -l` -eq 0 ]
    then
        systemctl stop nginx
        exit 1
    else
        exit 0
    fi
fi
eof

[root@k8s-master-01 ~]# chmod +x /etc/keepalived/check_nginx.sh
[root@k8s-master-01 ~]# scp /etc/keepalived/{check_nginx.sh,keepalived.conf} k8s-master-02:/etc/keepalived/
[root@k8s-master-01 ~]# scp /etc/keepalived/{check_nginx.sh,keepalived.conf} k8s-master-03:/etc/keepalived/

Copy the file and the script over; k8s-master-02 and k8s-master-03 are configured as above, except that the state field is changed to BACKUP and the priority is lowered, e.g. a priority of 150 on k8s-master-02 and 100 on k8s-master-03.
  • Modify the other two masters

[root@k8s-master-02 ~]# sed -i 's/MASTER/BACKUP/' /etc/keepalived/keepalived.conf 
[root@k8s-master-02 ~]# sed -i 's/200/150/' /etc/keepalived/keepalived.conf

[root@k8s-master-03 ~]# sed -i 's/MASTER/BACKUP/' /etc/keepalived/keepalived.conf 
[root@k8s-master-03 ~]# sed -i 's/200/100/' /etc/keepalived/keepalived.conf
  • Start nginx and keepalived

[root@k8s-master-all ~]# systemctl enable nginx --now
[root@k8s-master-all ~]# systemctl enable keepalived --now
  • Verify high-availability failover

[root@k8s-master-01 ~]# ip a | grep 192.168.110.20/24
    inet 192.168.110.20/24 scope global secondary ens33
    
# simulate a keepalived outage on master-01

[root@k8s-master-01 ~]# systemctl stop keepalived

[root@k8s-master-02 ~]# ip a | grep 192.168.110.20/24   # the VIP drifts to master-02
    inet 192.168.110.20/24 scope global secondary ens33

# after keepalived is also stopped on k8s-master-02, the VIP drifts to master-03
[root@k8s-master-03 ~]# ip a | grep 192.168.110.20/24
    inet 192.168.110.20/24 scope global secondary ens33
    
[root@k8s-master-01 ~]# systemctl start keepalived.service  # after recovery the VIP returns to master-01 (highest priority)
[root@k8s-master-01 ~]# ip a | grep 192.168.110.20/24
    inet 192.168.110.20/24 scope global secondary ens33
     
[root@k8s-all ~]# ping -c 2 192.168.110.20   # make sure the VIP is reachable from inside the cluster
PING 192.168.110.20 (192.168.110.20) 56(84) bytes of data.
64 bytes from 192.168.110.20: icmp_seq=1 ttl=64 time=1.03 ms
64 bytes from 192.168.110.20: icmp_seq=2 ttl=64 time=2.22 ms

--- 192.168.110.20 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1018ms
rtt min/avg/max/mdev = 1.034/1.627/2.220/0.593 ms

1.1.5 Cluster initialization

# create the initialization file kubeadm-init.yaml
[root@k8s-master-01 ~]# kubeadm config print init-defaults > kubeadm-init.yaml  
[root@k8s-master-01 ~]# sed -i 's/1.2.3.4/192.168.110.21/' kubeadm-init.yaml    # advertise address of this master host
[root@k8s-master-01 ~]# sed -i 's/: node/: k8s-master-01/' kubeadm-init.yaml    # set the node name
[root@k8s-master-01 ~]# sed -i '24 a controlPlaneEndpoint: 192.168.110.20:16443' kubeadm-init.yaml   # add the virtual IP
[root@k8s-master-01 ~]# sed -i 's#registry.k8s.io#registry.aliyuncs.com/google_containers#' kubeadm-init.yaml  # switch to the Aliyun registry
[root@k8s-master-01 ~]# sed -i 's/1.28.0/1.28.2/' kubeadm-init.yaml   # match the installed version
[root@k8s-master-01 ~]# cat >> kubeadm-init.yaml << eof   # enable IPVS
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
eof
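Optionally, the control-plane images can be pre-pulled with the same configuration file so the actual initialization goes faster (a suggestion, not a required step):

[root@k8s-master-01 ~]# kubeadm config images pull --config kubeadm-init.yaml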

Note: if docker is used as the container runtime, the CRI socket has to be changed to unix:///var/run/cri-dockerd.sock (see the sketch below).
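For that docker/cri-dockerd case, the socket is set in the nodeRegistration section of kubeadm-init.yaml, roughly as in this sketch; this guide uses containerd, whose default socket needs no change:

apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock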
  • Initialize Kubernetes with kubeadm from the configuration file

[root@k8s-master-01 ~]# kubeadm init --config=kubeadm-init.yaml --upload-certs --v=6 --ignore-preflight-errors="FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"   
# the bridge-related preflight error is ignored here

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.110.20:16443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:28d3d73a81af5468649d287d7269a3c3eca9aa3990c3897f4db04aac805de38a \
        --control-plane --certificate-key 6f7027ad52defa430f33c5d25e7a0d4c1a96f3c316bb58d564e04f04330e6fb5

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.110.20:16443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:28d3d73a81af5468649d287d7269a3c3eca9aa3990c3897f4db04aac805de38a
[root@k8s-master-01 ~]# mkdir -p $HOME/.kube
[root@k8s-master-01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master-01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

1.1.6 Joining the other master nodes

  • Create directories for the certificates on k8s-master-02 and k8s-master-03

[root@k8s-master-02 ~]# mkdir -p /etc/kubernetes/pki/etcd
[root@k8s-master-03 ~]# mkdir -p /etc/kubernetes/pki/etcd
  • Copy the certificates to the k8s-master-02 and k8s-master-03 nodes

[root@k8s-master-01 ~]# scp /etc/kubernetes/pki/ca.* root@k8s-master-02:/etc/kubernetes/pki/
[root@k8s-master-01 ~]# scp /etc/kubernetes/pki/ca.* root@k8s-master-03:/etc/kubernetes/pki/
   
[root@k8s-master-01 ~]# scp /etc/kubernetes/pki/sa.* root@k8s-master-02:/etc/kubernetes/pki/  
[root@k8s-master-01 ~]# scp /etc/kubernetes/pki/sa.* root@k8s-master-03:/etc/kubernetes/pki/ 

[root@k8s-master-01 ~]# scp /etc/kubernetes/pki/front-proxy-ca.* root@k8s-master-02:/etc/kubernetes/pki/
[root@k8s-master-01 ~]# scp /etc/kubernetes/pki/front-proxy-ca.* root@k8s-master-03:/etc/kubernetes/pki/
  
[root@k8s-master-01 ~]# scp /etc/kubernetes/pki/etcd/ca.* root@k8s-master-02:/etc/kubernetes/pki/etcd/
[root@k8s-master-01 ~]# scp /etc/kubernetes/pki/etcd/ca.* root@k8s-master-03:/etc/kubernetes/pki/etcd/

# this file is needed on both the master and the worker nodes
[root@k8s-master-01 ~]# scp  /etc/kubernetes/admin.conf k8s-master-02:/etc/kubernetes/ 
[root@k8s-master-01 ~]# scp  /etc/kubernetes/admin.conf k8s-master-03:/etc/kubernetes/ 
[root@k8s-master-01 ~]# scp  /etc/kubernetes/admin.conf k8s-node-01:/etc/kubernetes/
[root@k8s-master-01 ~]# scp  /etc/kubernetes/admin.conf k8s-node-02:/etc/kubernetes/  
[root@k8s-master-01 ~]# scp  /etc/kubernetes/admin.conf k8s-node-03:/etc/kubernetes/    
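With admin.conf in place, kubectl can also be used directly on a worker node by pointing KUBECONFIG at it (optional; shown for one node as an example):

[root@k8s-node-01 ~]# echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bashrc
[root@k8s-node-01 ~]# source ~/.bashrc
[root@k8s-node-01 ~]# kubectl get nodes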
  • Join the other master nodes to the cluster

[root@k8s-master-02 ~]# kubeadm join 192.168.110.20:16443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:28d3d73a81af5468649d287d7269a3c3eca9aa3990c3897f4db04aac805de38a \
--control-plane --certificate-key 6f7027ad52defa430f33c5d25e7a0d4c1a96f3c316bb58d564e04f04330e6fb5 \
--ignore-preflight-errors="FileContent--proc-sys-net-bridge-bridge-nf-call-iptables,FileContent--proc-sys-net-ipv4-ip_forward"

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.


[root@k8s-master-03 ~]# kubeadm join 192.168.110.20:16443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:28d3d73a81af5468649d287d7269a3c3eca9aa3990c3897f4db04aac805de38a \
--control-plane --certificate-key 6f7027ad52defa430f33c5d25e7a0d4c1a96f3c316bb58d564e04f04330e6fb5 \
--ignore-preflight-errors="FileContent--proc-sys-net-bridge-bridge-nf-call-iptables,FileContent--proc-sys-net-ipv4-ip_forward"

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.
  • Run the following on the k8s-master-02 and k8s-master-03 nodes to copy the kubeconfig file

[root@k8s-master-02 ~]# mkdir -p $HOME/.kube
[root@k8s-master-02 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master-02 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

[root@k8s-master-03 ~]# mkdir -p $HOME/.kube
[root@k8s-master-03 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master-03 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

1.1.7 Joining the worker nodes to the cluster

Note: with docker as the container runtime, --cri-socket unix:///var/run/cri-dockerd.sock has to be appended to the join command.

[root@k8s-node-all ~]# kubeadm join 192.168.110.20:16443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:28d3d73a81af5468649d287d7269a3c3eca9aa3990c3897f4db04aac805de38a \
--ignore-preflight-errors="FileContent--proc-sys-net-bridge-bridge-nf-call-iptables,FileContent--proc-sys-net-ipv4-ip_forward"

# list the nodes; they stay NotReady until the network plugin is installed in the next section
[root@k8s-master-01 ~]# kubectl get nodes
NAME            STATUS     ROLES           AGE     VERSION
k8s-master-01   NotReady   control-plane   7m19s   v1.28.2
k8s-master-02   NotReady   control-plane   80s     v1.28.2
k8s-master-03   NotReady   control-plane   2m1s    v1.28.2
k8s-node-01     NotReady   <none>          13s     v1.28.2
k8s-node-02     NotReady   <none>          13s     v1.28.2
k8s-node-03     NotReady   <none>          13s     v1.28.2

1.1.8 Installing the cluster network plugin

[root@k8s-master-01 ~]# wget -c https://calico-v3-25.netlify.app/archive/v3.25/manifests/calico.yaml
[root@k8s-master-01 ~]# vim calico.yaml
# The following two lines are commented out by default; uncomment them and set the
# second one to the pod network CIDR used for the kubeadm initialization.
4600             # no effect. This should fall within `--cluster-cidr`.
4601             - name: CALICO_IPV4POOL_CIDR
4602               value: "10.224.0.0/16"
4603             # Disable file logging so `kubectl logs` works.

[root@k8s-master-01 ~]# kubectl apply -f calico.yaml

[root@k8s-master-01 ~]# kubectl get node
NAME            STATUS   ROLES           AGE     VERSION
k8s-master-01   Ready    control-plane   14m     v1.28.2
k8s-master-02   Ready    control-plane   10m     v1.28.2
k8s-master-03   Ready    control-plane   10m     v1.28.2
k8s-node-01     Ready    <none>          9m50s   v1.28.2
k8s-node-02     Ready    <none>          9m50s   v1.28.2
k8s-node-03     Ready    <none>          9m50s   v1.28.2

[root@k8s-master-01 ~]# kubectl get pod -n kube-system 
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-658d97c59c-8mw6g   1/1     Running   0          5m47s
calico-node-5hmzv                          1/1     Running   0          5m47s
calico-node-5qpwt                          1/1     Running   0          5m47s
calico-node-9r72w                          1/1     Running   0          5m47s
calico-node-xgds5                          1/1     Running   0          5m47s
calico-node-z56q4                          1/1     Running   0          5m47s
calico-node-zcrrw                          1/1     Running   0          5m47s
coredns-66f779496c-9jldk                   1/1     Running   0          13m
coredns-66f779496c-dxsgv                   1/1     Running   0          13m
etcd-k8s-master-01                         1/1     Running   0          13m
etcd-k8s-master-02                         1/1     Running   0          9m33s
etcd-k8s-master-03                         1/1     Running   0          9m39s
kube-apiserver-k8s-master-01               1/1     Running   0          13m
kube-apiserver-k8s-master-02               1/1     Running   0          9m48s
kube-apiserver-k8s-master-03               1/1     Running   0          9m39s
kube-controller-manager-k8s-master-01      1/1     Running   0          13m
kube-controller-manager-k8s-master-02      1/1     Running   0          9m49s
kube-controller-manager-k8s-master-03      1/1     Running   0          9m36s
kube-proxy-565m4                           1/1     Running   0          8m56s
kube-proxy-hfk4w                           1/1     Running   0          8m56s
kube-proxy-n94sx                           1/1     Running   0          8m56s
kube-proxy-s75md                           1/1     Running   0          9m49s
kube-proxy-t5xl2                           1/1     Running   0          9m48s
kube-proxy-vqkxk                           1/1     Running   0          13m
kube-scheduler-k8s-master-01               1/1     Running   0          13m
kube-scheduler-k8s-master-02               1/1     Running   0          9m48s
kube-scheduler-k8s-master-03               1/1     Running   0          9m39s

# if the images cannot be pulled automatically, try pulling them manually
[root@k8s-master-01 ~]# crictl pull docker.io/calico/cni:v3.25.0
[root@k8s-master-01 ~]# crictl pull docker.io/calico/node:v3.25.0
[root@k8s-master-01 ~]# crictl pull docker.io/calico/kube-controllers:v3.25.0

1.1.9 Verifying the etcd cluster

  • Download the etcdctl client tool

[root@k8s-master-01 ~]# wget -c https://github.com/etcd-io/etcd/releases/download/v3.4.29/etcd-v3.4.29-linux-amd64.tar.gz
[root@k8s-master-01 ~]# tar xf etcd-v3.4.29-linux-amd64.tar.gz -C /usr/local/src/
[root@k8s-master-01 ~]# mv /usr/local/src/etcd-v3.4.29-linux-amd64/etcdctl /usr/local/bin/
  • Check the etcd cluster health

[root@k8s-master-01 ~]# ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key --write-out=table --endpoints=k8s-master-01:2379,k8s-master-02:2379,k8s-master-03:2379 endpoint health
+--------------------+--------+-------------+-------+
|      ENDPOINT      | HEALTH |    TOOK     | ERROR |
+--------------------+--------+-------------+-------+
| k8s-master-01:2379 |   true | 22.678298ms |       |
| k8s-master-02:2379 |   true | 22.823849ms |       |
| k8s-master-03:2379 |   true | 28.292332ms |       |
+--------------------+--------+-------------+-------+
  • List the etcd cluster members

[root@k8s-master-01 ~]# ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key --write-out=table --endpoints=k8s-master-01:2379,k8s-master-02:2379,k8s-master-03:2379 member list
+------------------+---------+---------------+-----------------------------+-----------------------------+------------+
|        ID        | STATUS  |     NAME      |         PEER ADDRS          |        CLIENT ADDRS         | IS LEARNER |
+------------------+---------+---------------+-----------------------------+-----------------------------+------------+
| 409450d991a8d0ba | started | k8s-master-03 | https://192.168.110.23:2380 | https://192.168.110.23:2379 |      false |
| a1a70c91a1d895bf | started | k8s-master-01 | https://192.168.110.21:2380 | https://192.168.110.21:2379 |      false |
| cc2c3b0e11f3279a | started | k8s-master-02 | https://192.168.110.22:2380 | https://192.168.110.22:2379 |      false |
+------------------+---------+---------------+-----------------------------+-----------------------------+------------+
  • Check the etcd cluster leader status

[root@k8s-master-01 ~]# ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key --write-out=table --endpoints=k8s-master-01:2379,k8s-master-02:2379,k8s-master-03:2379 endpoint status
+--------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|      ENDPOINT      |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+--------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| k8s-master-01:2379 | a1a70c91a1d895bf |   3.5.9 |  5.1 MB |      true |      false |         3 |       5124 |               5124 |        |
| k8s-master-02:2379 | cc2c3b0e11f3279a |   3.5.9 |  5.1 MB |     false |      false |         3 |       5124 |               5124 |        |
| k8s-master-03:2379 | 409450d991a8d0ba |   3.5.9 |  5.1 MB |     false |      false |         3 |       5124 |               5124 |        |
+--------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

1.1.10 Deploying an application and verifying access

[root@k8s-master-01 ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
[root@k8s-master-01 ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed
[root@k8s-master-01 ~]# kubectl get pod,svc
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-7854ff8877-7wxc8   1/1     Running   0          73s

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        28m
service/nginx        NodePort    10.102.85.72   <none>        80:31585/TCP   72s
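The nginx service can then be reached through the NodePort shown above from any cluster node (31585 in this run; the port is assigned randomly, so substitute your own):

[root@k8s-master-01 ~]# curl -I http://192.168.110.24:31585   # any node IP works; expect an HTTP 200 from nginx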
