1. Base environment
Hostname | IP address |
master1 | 10.66.6.2 |
node1 | 10.66.6.4 |
node2 | 10.66.6.5 |
Notes:
For high availability, run two master nodes behind an NGINX proxy; this walkthrough uses a single master.
The operating system is Ubuntu 20.04.
2. Base environment configuration
Set up passwordless SSH between the nodes using public keys.
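A minimal sketch of the key distribution, assuming root logins and the hostnames from the table above:

# generate a key pair on master1 (skip if one already exists)
ssh-keygen -t rsa -b 2048 -N '' -f /root/.ssh/id_rsa
# push the public key to every node (including master1 itself)
for i in master1 node1 node2;do
  ssh-copy-id root@$i
done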
2.1 Configure hosts on all nodes
10.66.6.2 master1
10.66.6.4 node1
10.66.6.5 node2
2.2 Disable the firewall, SELinux, dnsmasq, and swap
# disable the firewall (firewalld, postfix and SELinux are RHEL-specific; skip any service not present on Ubuntu)
systemctl disable --now firewalld
# disable dnsmasq
systemctl disable --now dnsmasq
# disable postfix
systemctl disable --now postfix
# disable NetworkManager
systemctl disable --now NetworkManager
# disable SELinux
sed -ri 's/(^SELINUX=).*/\1disabled/' /etc/selinux/config
setenforce 0
# disable swap
sed -ri 's@(^.*swap *swap.*0 0$)@#\1@' /etc/fstab
swapoff -a
2.3 Configure time synchronization
# install ntpdate
apt-get install ntpdate -y
# run a one-off sync; use your own NTP server if you have one
ntpdate ntp1.aliyun.com
# add a cron job
crontab -e
0 */1 * * * ntpdate ntp1.aliyun.com
2.4 Raise resource limits on all nodes
cat > /etc/security/limits.conf <<EOF
* soft core unlimited
* hard core unlimited
* soft nproc 1000000
* hard nproc 1000000
* soft nofile 1000000
* hard nofile 1000000
* soft memlock 32000
* hard memlock 32000
* soft msgqueue 8192000
EOF
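The new limits only apply to sessions opened after the change; after logging in again, a quick way to confirm them:

ulimit -n   # open files, expect 1000000
ulimit -u   # max user processes, expect 1000000
ulimit -l   # locked memory in KB, expect 32000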
2.5 Install base packages
apt-get install ipvsadm ipset conntrack sysstat libseccomp2 psmisc vim net-tools nfs-kernel-server telnet lvm2 git tar curl -y
2.6 Upgrade the system kernel
# check the running kernel
uname -r
# list kernels available in the package repositories
sudo apt list | grep linux-generic*
# install a newer kernel from the repositories
apt-get install linux-generic-hwe-20.04-edge/focal-updates
# download the mainline kernel helper script
wget https://raw.githubusercontent.com/pimlie/ubuntu-mainline-kernel.sh/master/ubuntu-mainline-kernel.sh
# place the script on the executable path
install ubuntu-mainline-kernel.sh /usr/local/bin/
# check the latest available kernel version
ubuntu-mainline-kernel.sh -c
# once you have confirmed this is the version you want, install it
ubuntu-mainline-kernel.sh -i
# reboot, then verify
reboot
uname -rs
2.7 Tune kernel parameters
cat > /etc/sysctl.conf <<EOF
net.ipv4.tcp_keepalive_time=600
net.ipv4.tcp_keepalive_intvl=30
net.ipv4.tcp_keepalive_probes=10
net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
net.ipv6.conf.lo.disable_ipv6=1
net.ipv4.neigh.default.gc_stale_time=120
net.ipv4.conf.all.rp_filter=0 # default is 1; strict reverse-path validation can drop packets
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.default.arp_announce=2
net.ipv4.conf.lo.arp_announce=2
net.ipv4.conf.all.arp_announce=2
net.ipv4.ip_local_port_range=45001 65000
net.ipv4.ip_forward=1
net.ipv4.tcp_max_tw_buckets=6000
net.ipv4.tcp_syncookies=1
net.ipv4.tcp_synack_retries=2
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.netfilter.nf_conntrack_max=2310720
net.ipv6.neigh.default.gc_thresh1=8192
net.ipv6.neigh.default.gc_thresh2=32768
net.ipv6.neigh.default.gc_thresh3=65536
net.core.netdev_max_backlog=16384 # per-CPU network device backlog queue length
net.core.rmem_max=16777216 # maximum read buffer size for all protocol types
net.core.wmem_max=16777216 # maximum write buffer size for all protocol types
net.ipv4.tcp_max_syn_backlog=8096 # first (SYN) backlog queue length
net.core.somaxconn=32768 # second (accept) backlog queue length
fs.inotify.max_user_instances=8192 # maximum inotify instances per real user ID; default 128
fs.inotify.max_user_watches=524288 # maximum watches a single user may add; default 8192
fs.file-max=52706963
fs.nr_open=52706963
kernel.pid_max=4194303
net.bridge.bridge-nf-call-arptables=1
vm.swappiness=0 # avoid swap; it is only used when the system is close to OOM
vm.overcommit_memory=1 # do not check whether physical memory is sufficient
vm.panic_on_oom=0 # let the OOM killer handle OOM instead of panicking
vm.max_map_count=262144
EOF
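Writing /etc/sysctl.conf does not apply anything by itself, and the net.bridge.* and nf_conntrack keys fail to load unless their modules are present; one way to apply the settings immediately:

# load the modules the bridge/conntrack keys depend on
modprobe br_netfilter
modprobe nf_conntrack
# apply /etc/sysctl.conf and spot-check a value
sysctl -p
sysctl net.ipv4.ip_forward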
2.8 Load the IPVS modules
cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_reject
EOF
systemctl enable --now systemd-modules-load.service
# reboot, then verify
lsmod | grep -e ip_vs -e nf_conntrack
3. Package preparation
All of the following are official release download links.
- Kubernetes 1.25.6:
https://dl.k8s.io/v1.25.6/kubernetes-server-linux-amd64.tar.gz
- etcd:
https://github.com/etcd-io/etcd/releases/download/v3.5.7/etcd-v3.5.7-linux-amd64.tar.gz
- Docker CE (static binaries):
https://download.docker.com/linux/static/stable/x86_64/
- cri-dockerd:
https://github.com/Mirantis/cri-dockerd/releases
- containerd:
https://github.com/containerd/containerd/releases
- cfssl:
https://github.com/cloudflare/cfssl/releases
4. Install Docker and cri-dockerd
4.1 Install Docker CE
tar xf docker-23.0.1.tgz
cp docker/* /usr/bin
containerd unit file
cat > /usr/lib/systemd/system/containerd.service <<EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=1048576
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF
Docker unit file
cat > /usr/lib/systemd/system/docker.service <<EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket containerd.service

[Service]
Type=notify
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP \$MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process
OOMScoreAdjust=-500

[Install]
WantedBy=multi-user.target
EOF
Docker socket file
cat > /usr/lib/systemd/system/docker.socket <<EOF
[Unit]
Description=Docker Socket for the API

[Socket]
ListenStream=/var/run/docker.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target
EOF
Create the Docker daemon configuration
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
Enable the services to start on boot
groupadd docker
systemctl enable --now containerd.service
systemctl enable --now docker.socket
systemctl enable --now docker.service
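A quick check that Docker is up and really using the systemd cgroup driver set in daemon.json (expected output: systemd):

docker info --format '{{.CgroupDriver}}'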
4.2 Install cri-dockerd
tar xf cri-dockerd-0.3.1.amd64.tgz
cp cri-dockerd/* /usr/bin
Create the unit file
cat > /usr/lib/systemd/system/cri-docker.service <<EOF
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket

[Service]
Type=notify
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni --pod-infra-container-image=kubernetes/pause:latest
ExecReload=/bin/kill -s HUP \$MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3

# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
EOF
Create the cri-docker socket file
cat > /usr/lib/systemd/system/cri-docker.socket <<EOF
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service

[Socket]
ListenStream=%t/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target
EOF
Enable and start the services
systemctl enable --now cri-docker.socket
systemctl enable --now cri-docker
4.3 Install containerd
tar xf containerd-1.6.19-linux-amd64.tar.gz -C /
cat > /usr/lib/systemd/system/containerd.service <<EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=1048576
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF
systemctl enable --now containerd.service
Write the default configuration and restart
mkdir /etc/containerd
containerd config default > /etc/containerd/config.toml
systemctl restart containerd
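Since the kubelet configuration later in this guide sets cgroupDriver: systemd, you will likely also want runc's systemd cgroup driver enabled in the generated config; a minimal sketch:

# flip the runc SystemdCgroup option in the default config, then restart
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
systemctl restart containerd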
4.4 Install the crictl client tool
# unpack (ideally match the crictl version to your Kubernetes minor version)
tar xf crictl-v1.22.0-linux-amd64.tar.gz -C /usr/bin/
# generate the configuration file
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
EOF
# test
crictl info
4.5 Install the cfssl tools
# run on the master node
tar xf cfssl-1.6.3.tar.gz -C /usr/bin
mkdir /opt/pki/{etcd,kubernetes} -p
5. Generate the Kubernetes cluster certificates
Perform these steps on the master node.
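The distribution loops in the following sections reference $master and $node host-list variables that are never defined explicitly; a plausible definition matching the host table in section 1 (adjust to your own environment):

# assumed host lists used by the scp/ssh loops below
master="master1"
node="node1 node2"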
5.1 Generate the etcd CA certificate
mkdir /opt/pki/etcd/ -p
cd /opt/pki/etcd/
# create a directory for the etcd CA
mkdir ca
# generate the etcd CA configuration and signing request files
cd ca/
Generate the configuration files
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "etcd": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
# generate the signing request file
cat > ca-csr.json <<EOF
{
  "CA": {"expiry": "87600h"},
  "CN": "etcd-cluster",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "etcd-cluster",
      "OU": "System"
    }
  ]
}
EOF
# generate the CA certificate
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
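To sanity-check the resulting CA, its subject and validity window can be inspected with openssl:

openssl x509 -in ca.pem -noout -subject -dates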
Generate the etcd server certificate
cd /opt/pki/etcd/
cat > etcd-server-csr.json <<EOF
{
  "CN": "etcd-server",
  "hosts": [
    "10.66.6.2",
    "10.66.6.3",
    "10.66.6.4",
    "10.66.6.5",
    "10.66.6.6",
    "127.0.0.1"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "etcd-server",
      "OU": "System"
    }
  ]
}
EOF
# generate the certificate
cfssl gencert \
  -ca=ca/ca.pem \
  -ca-key=ca/ca-key.pem \
  -config=ca/ca-config.json \
  -profile=etcd \
  etcd-server-csr.json | cfssljson -bare etcd-server
Generate the etcd client certificate
# generate the etcd client signing request file
cd /opt/pki/etcd/
cat > etcd-client-csr.json <<EOF
{
  "CN": "etcd-client",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "etcd-client",
      "OU": "System"
    }
  ]
}
EOF
# generate the certificate
cfssl gencert \
  -ca=ca/ca.pem \
  -ca-key=ca/ca-key.pem \
  -config=ca/ca-config.json \
  -profile=etcd \
  etcd-client-csr.json | cfssljson -bare etcd-client
Copy the certificates to the master and node hosts
for i in $master $node;do
  ssh $i "mkdir /etc/etcd/ssl -p"
  scp /opt/pki/etcd/ca/ca.pem /opt/pki/etcd/{etcd-server.pem,etcd-server-key.pem,etcd-client.pem,etcd-client-key.pem} $i:/etc/etcd/ssl/
done
5.2 Create certificates for the Kubernetes components
5.2.1 Create the Kubernetes CA
mkdir /opt/pki/kubernetes/ -p
cd /opt/pki/kubernetes/
mkdir ca
cd ca
Create the CA configuration file
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
Generate the CA signing request file
cat > ca-csr.json <<EOF
{
  "CA": {"expiry": "87600h"},
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "kubernetes",
      "OU": "System"
    }
  ]
}
EOF
Generate the CA certificate
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
5.3 Create the kube-apiserver certificate
mkdir /opt/pki/kubernetes/kube-apiserver -p
cd /opt/pki/kubernetes/kube-apiserver
Generate the signing request file
cat > kube-apiserver-csr.json <<EOF
{
  "CN": "kube-apiserver",
  "hosts": [
    "127.0.0.1",
    "10.66.6.2",
    "10.66.6.3",
    "10.66.6.4",
    "10.66.6.5",
    "10.66.6.6",
    "10.66.6.7",
    "10.200.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "kube-apiserver",
      "OU": "System"
    }
  ]
}
EOF
Generate the certificate
cfssl gencert \
  -ca=../ca/ca.pem \
  -ca-key=../ca/ca-key.pem \
  -config=../ca/ca-config.json \
  -profile=kubernetes \
  kube-apiserver-csr.json | cfssljson -bare kube-apiserver
# copy certificates to the master nodes
for i in $master;do
  ssh $i "mkdir /etc/kubernetes/pki -p"
  scp /opt/pki/kubernetes/ca/{ca.pem,ca-key.pem} /opt/pki/kubernetes/kube-apiserver/{kube-apiserver-key.pem,kube-apiserver.pem} $i:/etc/kubernetes/pki
done
# copy the CA certificate to the node hosts
node="node1 node2"
for i in $node;do
  ssh $i "mkdir /etc/kubernetes/pki -p"
  scp /opt/pki/kubernetes/ca/ca.pem $i:/etc/kubernetes/pki
done
5.4 Create the proxy-client CA and certificate
mkdir /opt/pki/proxy-client
cd /opt/pki/proxy-client
Generate the CA signing request file
cat > front-proxy-ca-csr.json <<EOF
{
  "CA": {"expiry": "87600h"},
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  }
}
EOF
Generate the CA certificate
cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare front-proxy-ca
Generate the client certificate signing request file
cat > front-proxy-client-csr.json <<EOF
{
  "CN": "front-proxy-client",
  "key": {
    "algo": "rsa",
    "size": 2048
  }
}
EOF
Generate the certificate
cfssl gencert \
  -ca=front-proxy-ca.pem \
  -ca-key=front-proxy-ca-key.pem \
  -config=../kubernetes/ca/ca-config.json \
  -profile=kubernetes \
  front-proxy-client-csr.json | cfssljson -bare front-proxy-client
Copy the certificates to the nodes
for i in $master;do
  scp /opt/pki/proxy-client/{front-proxy-ca.pem,front-proxy-client.pem,front-proxy-client-key.pem} $i:/etc/kubernetes/pki
done
for i in $node;do
  scp /opt/pki/proxy-client/front-proxy-ca.pem $i:/etc/kubernetes/pki
done
5.5 Create the kube-controller-manager certificate and kubeconfig
mkdir /opt/pki/kubernetes/kube-controller-manager
cd /opt/pki/kubernetes/kube-controller-manager
Generate the signing request file
cat > kube-controller-manager-csr.json <<EOF
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-controller-manager",
      "OU": "System"
    }
  ]
}
EOF
Generate the certificate
cfssl gencert \
  -ca=../ca/ca.pem \
  -ca-key=../ca/ca-key.pem \
  -config=../ca/ca-config.json \
  -profile=kubernetes \
  kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
Generate the kubeconfig
kubectl config set-cluster kubernetes \
  --certificate-authority=../ca/ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=kube-controller-manager.pem \
  --client-key=kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=kube-controller-manager.kubeconfig
kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
Copy the kubeconfig to the master nodes
for i in $master;do
  scp /opt/pki/kubernetes/kube-controller-manager/kube-controller-manager.kubeconfig $i:/etc/kubernetes
done
5.6 Generate the kube-scheduler certificate
mkdir /opt/pki/kubernetes/kube-scheduler
cd /opt/pki/kubernetes/kube-scheduler
Generate the signing request file
cat > kube-scheduler-csr.json <<EOF
{
  "CN": "system:kube-scheduler",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-scheduler",
      "OU": "System"
    }
  ]
}
EOF
Generate the certificate
cfssl gencert \
  -ca=../ca/ca.pem \
  -ca-key=../ca/ca-key.pem \
  -config=../ca/ca-config.json \
  -profile=kubernetes \
  kube-scheduler-csr.json | cfssljson -bare kube-scheduler
Generate the kubeconfig
kubectl config set-cluster kubernetes \
  --certificate-authority=../ca/ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=kube-scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
  --client-certificate=kube-scheduler.pem \
  --client-key=kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig
kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
Copy the kubeconfig to the master nodes
for i in $master;do
  scp /opt/pki/kubernetes/kube-scheduler/kube-scheduler.kubeconfig $i:/etc/kubernetes
done
5.7 Generate the cluster administrator certificate
mkdir /opt/pki/kubernetes/admin
cd /opt/pki/kubernetes/admin
Generate the signing request file
cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
Generate the certificate
cfssl gencert \
  -ca=../ca/ca.pem \
  -ca-key=../ca/ca-key.pem \
  -config=../ca/ca-config.json \
  -profile=kubernetes \
  admin-csr.json | cfssljson -bare admin
Generate the kubeconfig
kubectl config set-cluster kubernetes \
  --certificate-authority=../ca/ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=admin.kubeconfig
kubectl config set-credentials admin \
  --client-certificate=admin.pem \
  --client-key=admin-key.pem \
  --embed-certs=true \
  --kubeconfig=admin.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=admin \
  --kubeconfig=admin.kubeconfig
kubectl config use-context default --kubeconfig=admin.kubeconfig
6. Deploy etcd
6.1 Install etcd
tar xf etcd-v3.5.7-linux-amd64.tar.gz
cp etcd-v3.5.7-linux-amd64/etcd* /usr/bin/
rm -rf etcd-v3.5.7-linux-amd64
Create the configuration file
cat > /etc/etcd/etcd.config.yml <<EOF
name: 'etcd-1'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://10.66.6.2:2380'
listen-client-urls: 'https://10.66.6.2:2379,https://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://10.66.6.2:2380'
advertise-client-urls: 'https://10.66.6.2:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'etcd-1=https://10.66.6.2:2380' # list the etcd members according to your environment
initial-cluster-token: 'etcd-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/etcd/ssl/etcd-server.pem'
  key-file: '/etc/etcd/ssl/etcd-server-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/etcd/ssl/ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/etcd/ssl/etcd-server.pem'
  key-file: '/etc/etcd/ssl/etcd-server-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/etcd/ssl/ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF
Create the unit file
cat > /usr/lib/systemd/system/etcd.service <<EOF
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
ExecStart=/usr/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd3.service
EOF
systemctl enable --now etcd
6.2 Configure the etcdctl client tool
# set global environment variables
cat > /etc/profile.d/etcdctl.sh <<EOF
#!/bin/bash
export ETCDCTL_API=3
export ETCDCTL_ENDPOINTS=https://127.0.0.1:2379
export ETCDCTL_CACERT=/etc/etcd/ssl/ca.pem
export ETCDCTL_CERT=/etc/etcd/ssl/etcd-client.pem
export ETCDCTL_KEY=/etc/etcd/ssl/etcd-client-key.pem
EOF
# apply
source /etc/profile
# verify the cluster status
etcdctl member list
7. Deploy Kubernetes
Distribute the binaries
tar xf kubernetes-server-linux-amd64.tar.gz
# distribute the master components
for i in $master;do
  scp kubernetes/server/bin/{kubeadm,kube-apiserver,kube-controller-manager,kube-scheduler,kube-proxy,kubelet,kubectl} $i:/usr/bin
done
# distribute the node components
for i in $node;do
  scp kubernetes/server/bin/{kube-proxy,kubelet} $i:/usr/bin
done
7.1 Install kube-apiserver
# create the ServiceAccount signing key pair
openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub
# distribute to the master nodes
for i in $master;do
  scp /etc/kubernetes/pki/{sa.pub,sa.key} $i:/etc/kubernetes/pki/
done
Create the unit file
a=`ifconfig eth0 | awk 'NR==2{print $2}'`
cat > /etc/systemd/system/kube-apiserver.service <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/bin/kube-apiserver \\
  --v=2 \\
  --logtostderr=true \\
  --allow-privileged=true \\
  --bind-address=$a \\
  --secure-port=6443 \\
  --advertise-address=$a \\
  --service-cluster-ip-range=10.200.0.0/16 \\
  --service-node-port-range=30000-42767 \\
  --etcd-servers=https://10.66.6.2:2379 \\
  --etcd-cafile=/etc/etcd/ssl/ca.pem \\
  --etcd-certfile=/etc/etcd/ssl/etcd-client.pem \\
  --etcd-keyfile=/etc/etcd/ssl/etcd-client-key.pem \\
  --client-ca-file=/etc/kubernetes/pki/ca.pem \\
  --tls-cert-file=/etc/kubernetes/pki/kube-apiserver.pem \\
  --tls-private-key-file=/etc/kubernetes/pki/kube-apiserver-key.pem \\
  --kubelet-client-certificate=/etc/kubernetes/pki/kube-apiserver.pem \\
  --kubelet-client-key=/etc/kubernetes/pki/kube-apiserver-key.pem \\
  --service-account-key-file=/etc/kubernetes/pki/sa.pub \\
  --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \\
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \\
  --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \\
  --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \\
  --authorization-mode=Node,RBAC \\
  --enable-bootstrap-token-auth=true \\
  --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \\
  --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \\
  --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \\
  --requestheader-allowed-names=aggregator \\
  --requestheader-group-headers=X-Remote-Group \\
  --requestheader-extra-headers-prefix=X-Remote-Extra- \\
  --requestheader-username-headers=X-Remote-User
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF
Start the service
systemctl enable --now kube-apiserver.service
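A quick liveness check against the API server; with the default RBAC bindings, /healthz is readable without authentication, so this should print ok:

curl -k https://127.0.0.1:6443/healthz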
7.2 Install kube-controller-manager
# generate the unit file
cat > /etc/systemd/system/kube-controller-manager.service <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/bin/kube-controller-manager \\
  --v=2 \\
  --logtostderr=true \\
  --root-ca-file=/etc/kubernetes/pki/ca.pem \\
  --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \\
  --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \\
  --service-account-private-key-file=/etc/kubernetes/pki/sa.key \\
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
  --leader-elect=true \\
  --use-service-account-credentials=true \\
  --node-monitor-grace-period=40s \\
  --node-monitor-period=5s \\
  --pod-eviction-timeout=2m0s \\
  --controllers=*,bootstrapsigner,tokencleaner \\
  --allocate-node-cidrs=true \\
  --cluster-cidr=10.100.0.0/16 \\
  --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \\
  --node-cidr-mask-size=24
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF
# start the service
systemctl enable --now kube-controller-manager.service
7.3 Install kube-scheduler
# generate the unit file
cat > /etc/systemd/system/kube-scheduler.service <<EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/bin/kube-scheduler \\
  --v=2 \\
  --logtostderr=true \\
  --leader-elect=true \\
  --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF
# start the service
systemctl enable --now kube-scheduler.service
7.4 Deploy the kubectl tool on the master node
mkdir /root/.kube/ -p
cp /opt/pki/kubernetes/admin/admin.kubeconfig /root/.kube/config
Verify
kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
7.5 Deploy kubelet
7.5.1 Authenticate kubelets automatically with TLS bootstrapping
Create the TLS bootstrapping credentials
mkdir /opt/pki/kubernetes/kubelet -p
cd /opt/pki/kubernetes/kubelet
# generate a random token id and secret
a=`head -c 16 /dev/urandom | od -An -t x | tr -d ' ' | head -c6`
b=`head -c 16 /dev/urandom | od -An -t x | tr -d ' ' | head -c16`
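The two values form a standard bootstrap token, <6-char id>.<16-char secret>, which the kubeconfig below embeds; a quick look at what was generated (the value shown is a hypothetical example):

echo "$a.$b"   # e.g. 07401b.f395accd246ae52d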
Generate the RBAC binding manifest
cat > bootstrap.secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-$a
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  description: "The default bootstrap token generated by 'kubelet'."
  token-id: $a
  token-secret: $b
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  auth-extra-groups: system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-certificate-rotation
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
- apiGroups:
  - ""
  resources:
  - nodes/proxy
  - nodes/stats
  - nodes/log
  - nodes/spec
  - nodes/metrics
  verbs:
  - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kube-apiserver
EOF
Generate the kubeconfig
kubectl config set-cluster kubernetes \
  --certificate-authority=../ca/ca.pem \
  --embed-certs=true \
  --server=https://10.66.6.2:6443 \
  --kubeconfig=bootstrap-kubelet.kubeconfig
kubectl config set-credentials tls-bootstrap-token-user \
  --token=$a.$b \
  --kubeconfig=bootstrap-kubelet.kubeconfig
kubectl config set-context tls-bootstrap-token-user@kubernetes \
  --cluster=kubernetes \
  --user=tls-bootstrap-token-user \
  --kubeconfig=bootstrap-kubelet.kubeconfig
kubectl config use-context tls-bootstrap-token-user@kubernetes \
  --kubeconfig=bootstrap-kubelet.kubeconfig
# create the RBAC objects
kubectl apply -f bootstrap.secret.yaml
Distribute the credential file
for i in $master $node;do
  ssh $i "mkdir /etc/kubernetes -p"
  scp /opt/pki/kubernetes/kubelet/bootstrap-kubelet.kubeconfig $i:/etc/kubernetes
done
7.5.2 Deploy the kubelet component
Option A: run containers with Docker
mkdir /etc/systemd/system/kubelet.service.d/ -p
mkdir /etc/kubernetes/manifests/ -p
Create the unit file
cat > /usr/lib/systemd/system/kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF
Create the unit drop-in configuration
cat > /etc/systemd/system/kubelet.service.d/10-kubelet.conf <<EOF
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--hostname-override=10.66.6.2"
Environment="KUBELET_RUNTIME_ARGS=--container-runtime=remote --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node=''"
ExecStart=
ExecStart=/usr/bin/kubelet \$KUBELET_KUBECONFIG_ARGS \$KUBELET_CONFIG_ARGS \$KUBELET_SYSTEM_ARGS \$KUBELET_EXTRA_ARGS \$KUBELET_RUNTIME_ARGS
EOF
Option B: run containers with containerd
a=`ifconfig eth0 | awk 'NR==2{print $2}'`
mkdir /etc/systemd/system/kubelet.service.d/ -p
mkdir /etc/kubernetes/manifests/ -p
# generate the unit file
cat > /etc/systemd/system/kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
ExecStart=/usr/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF
# generate the unit drop-in configuration
cat > /etc/systemd/system/kubelet.service.d/10-kubelet.conf <<EOF
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--hostname-override=$a"
Environment="KUBELET_RUNTIME_ARGS=--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node=''"
ExecStart=
ExecStart=/usr/bin/kubelet \$KUBELET_KUBECONFIG_ARGS \$KUBELET_CONFIG_ARGS \$KUBELET_SYSTEM_ARGS \$KUBELET_EXTRA_ARGS \$KUBELET_RUNTIME_ARGS
EOF
Generate the kubelet configuration file
a=`ifconfig eth0 | awk 'NR==2{print $2}'`
# generate the configuration file
cat > /etc/kubernetes/kubelet-conf.yml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: $a
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.200.0.2
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
EOF
Start the service
systemctl enable --now kubelet.service
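Once kubelet starts, the bootstrap token should produce an auto-approved CSR and the node should register itself; a quick check from the master:

kubectl get csr    # the CSR should show Approved,Issued
kubectl get node   # the node appears, NotReady until the CNI plugin is installed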
7.6 Deploy kube-proxy
mkdir /opt/pki/kubernetes/kube-proxy/ -p
cd /opt/pki/kubernetes/kube-proxy/
Generate the kubeconfig
kubectl -n kube-system create serviceaccount kube-proxy
kubectl create clusterrolebinding system:kube-proxy \
  --clusterrole system:node-proxier \
  --serviceaccount kube-system:kube-proxy
cat > kube-proxy-secret.yml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: kube-proxy
  namespace: kube-system
  annotations:
    kubernetes.io/service-account.name: "kube-proxy"
type: kubernetes.io/service-account-token
EOF
kubectl apply -f kube-proxy-secret.yml
JWT_TOKEN=$(kubectl -n kube-system get secret/kube-proxy \
  --output=jsonpath='{.data.token}' | base64 -d)
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://10.66.6.2:6443 \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kubernetes \
  --token=${JWT_TOKEN} \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=kubernetes \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context kubernetes \
  --kubeconfig=kube-proxy.kubeconfig
Copy the kubeconfig to the nodes
for i in $master $node;do
  scp /opt/pki/kubernetes/kube-proxy/kube-proxy.kubeconfig $i:/etc/kubernetes
done
Create the unit file
cat > /etc/systemd/system/kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/bin/kube-proxy \\
  --config=/etc/kubernetes/kube-proxy.yaml \\
  --v=2
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF
Create the configuration file
cat > /etc/kubernetes/kube-proxy.yaml <<EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 10.66.6.2
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 10.100.0.0/16
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: "10.66.6.2"
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
EOF
Start the service
systemctl enable --now kube-proxy.service
Verify the working mode
curl 127.0.0.1:10249/proxyMode   # expect: ipvs
8. Install add-ons
8.1 Install the Calico network plugin
Download the YAML manifest:
https://raw.githubusercontent.com/projectcalico/calico/v3.24.5/manifests/calico-typha.yaml
Modify it as follows:
- name: CALICO_IPV4POOL_CIDR
  value: "10.100.0.0/16"

kubectl apply -f calico-typha.yaml
# verify
kubectl get node
8.2 Install the calicoctl client
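This section assumes the calicoctl binary is already on the PATH; one way to fetch it, assuming the standard release asset name for v3.24.5:

wget https://github.com/projectcalico/calico/releases/download/v3.24.5/calicoctl-linux-amd64 -O /usr/local/bin/calicoctl
chmod +x /usr/local/bin/calicoctl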
mkdir /etc/calico -p
cat > /etc/calico/calicoctl.cfg <<EOF
apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
  datastoreType: "kubernetes"
  kubeconfig: "/root/.kube/config"
EOF
# verify
calicoctl node status
8.3 Install the dashboard
Manifest:
https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
Modify the YAML manifest
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort        # added
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001   # added
  selector:
    k8s-app: kubernetes-dashboard

# create
kubectl apply -f dashboard.yaml
Create the admin user manifest
cat > admin.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/service-account.name: "admin-user"
type: kubernetes.io/service-account-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF
Create the user and fetch the token
kubectl apply -f admin.yaml
# get the user token
kubectl describe secrets -n kubernetes-dashboard admin-user
8.4 Install metrics-server
Download address:
https://github.com/kubernetes-sigs/metrics-server/
Copy the certificate file
for i in $node;do
  scp /opt/pki/proxy-client/front-proxy-ca.pem $i:/etc/kubernetes/pki/
done
Modify the deployment arguments
- --cert-dir=/tmp
- --secure-port=4443
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --kubelet-use-node-status-port
- --metric-resolution=15s
- --kubelet-insecure-tls
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem
- --requestheader-username-headers=X-Remote-User
- --requestheader-group-headers=X-Remote-Group
- --requestheader-extra-headers-prefix=X-Remote-Extra-
volumeMounts:
- mountPath: /tmp
  name: tmp-dir
- mountPath: /etc/kubernetes/pki
  name: ca-ssl
volumes:
- emptyDir: {}
  name: tmp-dir
- name: ca-ssl
  hostPath:
    path: /etc/kubernetes/pki
kubectl apply -f components.yaml
# verify
kubectl top node
Summary
The above is based on my own experience; I hope it serves as a useful reference.