K8s HA Cluster Binary Deployment - v1.20

Source: cnblogs  Author: 二价亚铁  Date: 2024/6/25 8:51:31

1. Background

1.1 Two Ways to Deploy a Production K8s Cluster

kubeadm

Kubeadm is a K8s deployment tool that provides kubeadm init and kubeadm join for quickly standing up a Kubernetes cluster.

Binary packages

Download the release binaries from GitHub and deploy each component by hand to assemble a Kubernetes cluster.

Summary: kubeadm lowers the barrier to entry but hides many of the details, which makes problems hard to troubleshoot. If you want something easier to control, deploying Kubernetes from binary packages is recommended: it is more manual work, but along the way you learn a lot about how the system actually works, which also helps with later maintenance.
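
For reference, the kubeadm workflow looks roughly like the sketch below. This is only an illustration of the alternative approach; the version and pod CIDR flag values are placeholders, not part of this tutorial's binary deployment.

  # On the first master: initialize the control plane (flag values are illustrative)
  kubeadm init --kubernetes-version=v1.20.4 --pod-network-cidr=10.244.0.0/16
  # On each worker: join using the token printed by kubeadm init
  # (<token> and <hash> are placeholders)
  kubeadm join 192.168.31.71:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>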

1.2 Environment Preparation

Server requirements:

  • Recommended minimum hardware: 2 CPU cores, 2 GB RAM, 30 GB disk
  • Servers should ideally be able to reach the Internet, since images will be pulled from the network; if they cannot, download the required images in advance and import them on each node

Software environment:

  Software           Version
  Operating system   CentOS 7.x x86_64 (minimal)
  Container engine   Docker CE 19
  Kubernetes         Kubernetes v1.20

Overall server plan:

  Role          IP              Components
  k8s-master1   192.168.31.71   kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, docker, etcd, nginx, keepalived
  k8s-master2   192.168.31.74   kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, docker, nginx, keepalived
  k8s-node1     192.168.31.72   kubelet, kube-proxy, docker, etcd
  k8s-node2     192.168.31.73   kubelet, kube-proxy, docker, etcd
  Load balancer VIP: 192.168.31.88

Note: since some readers' machines cannot run four VMs at once, this HA cluster is built in two stages: first deploy a single-Master architecture (3 machines), then scale it out to a multi-Master architecture (4 or 6 machines), which conveniently also demonstrates the Master scale-out procedure.

Single-Master architecture diagram: [image]

Single-Master server plan:

  Role         IP              Components
  k8s-master   192.168.31.71   kube-apiserver, kube-controller-manager, kube-scheduler, etcd
  k8s-node1    192.168.31.72   kubelet, kube-proxy, docker, etcd
  k8s-node2    192.168.31.73   kubelet, kube-proxy, docker, etcd

1.3 Operating System Initialization

  # Turn off the firewall
  systemctl stop firewalld
  systemctl disable firewalld

  # Disable SELinux
  sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
  setenforce 0  # temporary

  # Disable swap
  swapoff -a  # temporary
  sed -ri 's/.*swap.*/#&/' /etc/fstab  # permanent

  # Set the hostname according to the plan
  hostnamectl set-hostname <hostname>

  # Add hosts entries on the master
  cat >> /etc/hosts << EOF
  192.168.31.71 k8s-master1
  192.168.31.72 k8s-node1
  192.168.31.73 k8s-node2
  EOF

  # Pass bridged IPv4 traffic to iptables chains
  cat > /etc/sysctl.d/k8s.conf << EOF
  net.bridge.bridge-nf-call-ip6tables = 1
  net.bridge.bridge-nf-call-iptables = 1
  EOF
  sysctl --system  # apply

  # Time synchronization
  yum install ntpdate -y
  ntpdate time.windows.com
2. Deploy the Etcd Cluster

Etcd is a distributed key-value store that Kubernetes uses for all of its data, so an Etcd database must be prepared first. To avoid a single point of failure, deploy Etcd as a cluster: 3 nodes tolerate 1 machine failure, while 5 nodes tolerate 2.

  Node name   IP
  etcd-1      192.168.31.71
  etcd-2      192.168.31.72
  etcd-3      192.168.31.73

Note: to save machines, etcd is co-located with the K8s nodes here. It can also be deployed outside the K8s cluster, as long as the apiserver can reach it.
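
Those fault-tolerance numbers follow directly from etcd's majority-quorum rule: writes require a majority of members, so a cluster of N members tolerates N minus quorum failures:

  quorum(N)    = floor(N/2) + 1
  tolerated(N) = N - quorum(N)
  N = 3: quorum 2, tolerates 1 failure
  N = 5: quorum 3, tolerates 2 failures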

2.1 Prepare the cfssl Certificate Tool

cfssl is an open-source certificate management tool that generates certificates from JSON files and is more convenient to use than openssl.

Run this on any one server; the Master node is used here.

  wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
  wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
  wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
  chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
  mv cfssl_linux-amd64 /usr/local/bin/cfssl
  mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
  mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
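
To confirm the tools are installed and on the PATH:

  cfssl version    # prints version and runtime information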

2.2 Generate Etcd Certificates

1. Self-sign a certificate authority (CA)

Create working directories:

  mkdir -p ~/TLS/{etcd,k8s}
  cd ~/TLS/etcd

Self-sign the CA:

  cat > ca-config.json << EOF
  {
    "signing": {
      "default": {
        "expiry": "87600h"
      },
      "profiles": {
        "www": {
          "expiry": "87600h",
          "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
          ]
        }
      }
    }
  }
  EOF

  cat > ca-csr.json << EOF
  {
    "CN": "etcd CA",
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "L": "Beijing",
        "ST": "Beijing"
      }
    ]
  }
  EOF

Generate the certificate:

  cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

This produces ca.pem and ca-key.pem.


2. Issue the Etcd HTTPS certificate with the self-signed CA

Create the certificate signing request file:

  cat > server-csr.json << EOF
  {
    "CN": "etcd",
    "hosts": [
      "192.168.31.71",
      "192.168.31.72",
      "192.168.31.73"
    ],
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "L": "BeiJing",
        "ST": "BeiJing"
      }
    ]
  }
  EOF

Note: the IPs in the hosts field above are the cluster-internal communication IPs of all etcd nodes; not one may be missing! To make later scale-out easier, you can list a few spare IPs as well.

Generate the certificate:

  cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

This produces server.pem and server-key.pem.
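
If you want to double-check what was issued, cfssl-certinfo (installed earlier) can dump the certificate; the sans field should list all three etcd IPs:

  cfssl-certinfo -cert server.pem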


2.3 Download the Binaries from GitHub

Download: https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz

2.4 Deploy the Etcd Cluster

The following is performed on node 1. To simplify things, all files generated on node 1 will be copied to nodes 2 and 3 afterwards.

1. Create the working directory and unpack the binary package

  mkdir /opt/etcd/{bin,cfg,ssl} -p
  tar zxvf etcd-v3.4.9-linux-amd64.tar.gz
  mv etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/

2. Create the etcd configuration file

  cat > /opt/etcd/cfg/etcd.conf << EOF
  #[Member]
  ETCD_NAME="etcd-1"
  ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
  ETCD_LISTEN_PEER_URLS="https://192.168.31.71:2380"
  ETCD_LISTEN_CLIENT_URLS="https://192.168.31.71:2379"
  #[Clustering]
  ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.31.71:2380"
  ETCD_ADVERTISE_CLIENT_URLS="https://192.168.31.71:2379"
  ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.31.71:2380,etcd-2=https://192.168.31.72:2380,etcd-3=https://192.168.31.73:2380"
  ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
  ETCD_INITIAL_CLUSTER_STATE="new"
  EOF

  • ETCD_NAME: node name, unique within the cluster
  • ETCD_DATA_DIR: data directory
  • ETCD_LISTEN_PEER_URLS: peer (cluster) listen address
  • ETCD_LISTEN_CLIENT_URLS: client listen address
  • ETCD_INITIAL_ADVERTISE_PEER_URLS: advertised peer address
  • ETCD_ADVERTISE_CLIENT_URLS: advertised client address
  • ETCD_INITIAL_CLUSTER: cluster node addresses
  • ETCD_INITIAL_CLUSTER_TOKEN: cluster token
  • ETCD_INITIAL_CLUSTER_STATE: state when joining the cluster; new for a new cluster, existing to join an existing one

3. Manage etcd with systemd

  cat > /usr/lib/systemd/system/etcd.service << EOF
  [Unit]
  Description=Etcd Server
  After=network.target
  After=network-online.target
  Wants=network-online.target
  [Service]
  Type=notify
  EnvironmentFile=/opt/etcd/cfg/etcd.conf
  ExecStart=/opt/etcd/bin/etcd --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --peer-cert-file=/opt/etcd/ssl/server.pem --peer-key-file=/opt/etcd/ssl/server-key.pem --trusted-ca-file=/opt/etcd/ssl/ca.pem --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem --logger=zap
  Restart=on-failure
  LimitNOFILE=65536
  [Install]
  WantedBy=multi-user.target
  EOF

4. Copy the certificates generated earlier

Copy them to the paths referenced in the configuration file:

  cp ~/TLS/etcd/ca*pem ~/TLS/etcd/server*pem /opt/etcd/ssl/

5. Start and enable at boot

  systemctl daemon-reload
  systemctl start etcd
  systemctl enable etcd

Note: etcd on the first node will wait for the other members and may appear to hang (or time out) until nodes 2 and 3 are started; that is expected.

6. Copy all files generated on node 1 to nodes 2 and 3

  scp -r /opt/etcd/ root@192.168.31.72:/opt/
  scp /usr/lib/systemd/system/etcd.service root@192.168.31.72:/usr/lib/systemd/system/
  scp -r /opt/etcd/ root@192.168.31.73:/opt/
  scp /usr/lib/systemd/system/etcd.service root@192.168.31.73:/usr/lib/systemd/system/

Then, on nodes 2 and 3, edit the node name and IPs in etcd.conf to match the local server:

  vi /opt/etcd/cfg/etcd.conf
  #[Member]
  ETCD_NAME="etcd-1"   # change: etcd-2 on node 2, etcd-3 on node 3
  ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
  ETCD_LISTEN_PEER_URLS="https://192.168.31.71:2380"   # change to the local server IP
  ETCD_LISTEN_CLIENT_URLS="https://192.168.31.71:2379"   # change to the local server IP
  #[Clustering]
  ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.31.71:2380"   # change to the local server IP
  ETCD_ADVERTISE_CLIENT_URLS="https://192.168.31.71:2379"   # change to the local server IP
  ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.31.71:2380,etcd-2=https://192.168.31.72:2380,etcd-3=https://192.168.31.73:2380"
  ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
  ETCD_INITIAL_CLUSTER_STATE="new"

Finally, start etcd and enable it at boot, as above.


7. Check cluster status

  ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.31.71:2379,https://192.168.31.72:2379,https://192.168.31.73:2379" endpoint health --write-out=table
  +----------------------------+--------+-------------+-------+
  |          ENDPOINT          | HEALTH |    TOOK     | ERROR |
  +----------------------------+--------+-------------+-------+
  | https://192.168.31.71:2379 |  true  | 10.301506ms |       |
  | https://192.168.31.73:2379 |  true  | 12.87467ms  |       |
  | https://192.168.31.72:2379 |  true  | 13.225954ms |       |
  +----------------------------+--------+-------------+-------+

If you see output like the above, the cluster was deployed successfully.

If anything goes wrong, check the logs first: /var/log/messages or journalctl -u etcd
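
Besides endpoint health, listing the members is a quick way to confirm that all three nodes actually joined (same TLS flags as above):

  ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.31.71:2379" member list --write-out=table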


3. Install Docker

Docker is used as the container engine here; you can substitute another engine such as containerd.

Download: https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz

Perform the following on all nodes. A binary install is shown here; installing via yum works just as well.

3.1 Unpack the binary package

  tar zxvf docker-19.03.9.tgz
  mv docker/* /usr/bin

3.2 Manage Docker with systemd

  cat > /usr/lib/systemd/system/docker.service << EOF
  [Unit]
  Description=Docker Application Container Engine
  Documentation=https://docs.docker.com
  After=network-online.target firewalld.service
  Wants=network-online.target
  [Service]
  Type=notify
  ExecStart=/usr/bin/dockerd
  ExecReload=/bin/kill -s HUP \$MAINPID
  LimitNOFILE=infinity
  LimitNPROC=infinity
  LimitCORE=infinity
  TimeoutStartSec=0
  Delegate=yes
  KillMode=process
  Restart=on-failure
  StartLimitBurst=3
  StartLimitInterval=60s
  [Install]
  WantedBy=multi-user.target
  EOF

3.3 Create the configuration file

  mkdir /etc/docker
  cat > /etc/docker/daemon.json << EOF
  {
    "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
  }
  EOF

  • registry-mirrors: Aliyun registry mirror accelerator

3.4 Start and enable at boot

  systemctl daemon-reload
  systemctl start docker
  systemctl enable docker
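
To confirm the daemon is up and the mirror from daemon.json took effect:

  docker info | grep -A 1 -i "registry mirrors"   # should list the aliyuncs mirror
  docker version                                  # client and server should both report 19.03.9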

4. Deploy the Master Node


4.1 Generate kube-apiserver Certificates

1. Self-sign a certificate authority (CA)

  cd ~/TLS/k8s

  cat > ca-config.json << EOF
  {
    "signing": {
      "default": {
        "expiry": "87600h"
      },
      "profiles": {
        "kubernetes": {
          "expiry": "87600h",
          "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
          ]
        }
      }
    }
  }
  EOF

  cat > ca-csr.json << EOF
  {
    "CN": "kubernetes",
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "L": "Beijing",
        "ST": "Beijing",
        "O": "k8s",
        "OU": "System"
      }
    ]
  }
  EOF

Generate the certificate:

  cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

This produces ca.pem and ca-key.pem.

2. Issue the kube-apiserver HTTPS certificate with the self-signed CA

Create the certificate signing request file:

  cat > server-csr.json << EOF
  {
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "192.168.31.71",
      "192.168.31.72",
      "192.168.31.73",
      "192.168.31.74",
      "192.168.31.88",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "L": "BeiJing",
        "ST": "BeiJing",
        "O": "k8s",
        "OU": "System"
      }
    ]
  }
  EOF

Note: the IPs in the hosts field above are all of the Master/LB/VIP IPs; not one may be missing! (192.168.31.74 is included because Master2 will reuse this certificate later.) To make later scale-out easier, you can list a few spare IPs as well.

Generate the certificate:

  cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

This produces server.pem and server-key.pem.

4.2 Download Binaries from GitHub

Download page: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md

Note: the page links many packages; downloading the server package alone is enough, since it contains the binaries for both Master and Worker Node.

4.3 Unpack the binary package

  mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
  tar zxvf kubernetes-server-linux-amd64.tar.gz
  cd kubernetes/server/bin
  cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin
  cp kubectl /usr/bin/

4.4 Deploy kube-apiserver

1. Create the configuration file

  cat > /opt/kubernetes/cfg/kube-apiserver.conf << EOF
  KUBE_APISERVER_OPTS="--logtostderr=false \\
  --v=2 \\
  --log-dir=/opt/kubernetes/logs \\
  --etcd-servers=https://192.168.31.71:2379,https://192.168.31.72:2379,https://192.168.31.73:2379 \\
  --bind-address=192.168.31.71 \\
  --secure-port=6443 \\
  --advertise-address=192.168.31.71 \\
  --allow-privileged=true \\
  --service-cluster-ip-range=10.0.0.0/24 \\
  --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
  --authorization-mode=RBAC,Node \\
  --enable-bootstrap-token-auth=true \\
  --token-auth-file=/opt/kubernetes/cfg/token.csv \\
  --service-node-port-range=30000-32767 \\
  --kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \\
  --kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \\
  --tls-cert-file=/opt/kubernetes/ssl/server.pem \\
  --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
  --client-ca-file=/opt/kubernetes/ssl/ca.pem \\
  --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
  --service-account-issuer=api \\
  --service-account-signing-key-file=/opt/kubernetes/ssl/server-key.pem \\
  --etcd-cafile=/opt/etcd/ssl/ca.pem \\
  --etcd-certfile=/opt/etcd/ssl/server.pem \\
  --etcd-keyfile=/opt/etcd/ssl/server-key.pem \\
  --requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem \\
  --proxy-client-cert-file=/opt/kubernetes/ssl/server.pem \\
  --proxy-client-key-file=/opt/kubernetes/ssl/server-key.pem \\
  --requestheader-allowed-names=kubernetes \\
  --requestheader-extra-headers-prefix=X-Remote-Extra- \\
  --requestheader-group-headers=X-Remote-Group \\
  --requestheader-username-headers=X-Remote-User \\
  --enable-aggregator-routing=true \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
  EOF

Note: in the \\ above, the first backslash is the shell escape character and the second is the line-continuation character; the escape is needed so that the heredoc (EOF) preserves the continuation backslash.

  • --logtostderr: enable logging
  • --v: log level
  • --log-dir: log directory
  • --etcd-servers: etcd cluster addresses
  • --bind-address: listen address
  • --secure-port: HTTPS secure port
  • --advertise-address: cluster advertise address
  • --allow-privileged: allow privileged containers
  • --service-cluster-ip-range: Service virtual IP range
  • --enable-admission-plugins: admission control plugins
  • --authorization-mode: authorization modes; enables RBAC authorization and Node self-management
  • --enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
  • --token-auth-file: bootstrap token file
  • --service-node-port-range: default port range for NodePort-type Services
  • --kubelet-client-xxx: client certificates for apiserver access to the kubelet
  • --tls-xxx-file: apiserver HTTPS certificates
  • --service-account-issuer, --service-account-signing-key-file: parameters required as of v1.20
  • --etcd-xxxfile: certificates for connecting to the etcd cluster
  • --audit-log-xxx: audit logging
  • aggregation layer settings: --requestheader-client-ca-file, --proxy-client-cert-file, --proxy-client-key-file, --requestheader-allowed-names, --requestheader-extra-headers-prefix, --requestheader-group-headers, --requestheader-username-headers, --enable-aggregator-routing

2. Copy the certificates generated earlier

Copy them to the paths referenced in the configuration file:

  cp ~/TLS/k8s/ca*pem ~/TLS/k8s/server*pem /opt/kubernetes/ssl/

3. Enable the TLS Bootstrapping mechanism

TLS Bootstrapping: once the Master apiserver enables TLS authentication, the kubelet and kube-proxy on each Node must use valid CA-issued certificates to communicate with kube-apiserver. With many Nodes, issuing client certificates by hand is a lot of work and complicates cluster scale-out. To simplify this, Kubernetes introduced TLS bootstrapping to issue client certificates automatically: the kubelet requests a certificate from the apiserver as a low-privilege user, and the apiserver signs the kubelet's certificate dynamically. This approach is strongly recommended on Nodes; it is currently used mainly for the kubelet, while kube-proxy still uses a single certificate that we issue ourselves.

TLS bootstrapping workflow: [image]

Create the token file referenced in the configuration above:

  cat > /opt/kubernetes/cfg/token.csv << EOF
  c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper"
  EOF

Format: token, user name, UID, user group

You can also generate your own token and substitute it:

  head -c 16 /dev/urandom | od -An -t x | tr -d ' '

4. Manage apiserver with systemd

  cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
  [Unit]
  Description=Kubernetes API Server
  Documentation=https://github.com/kubernetes/kubernetes
  [Service]
  EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
  ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
  Restart=on-failure
  [Install]
  WantedBy=multi-user.target
  EOF

5. Start and enable at boot

  systemctl daemon-reload
  systemctl start kube-apiserver
  systemctl enable kube-apiserver
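
A quick check that the apiserver actually came up and is listening on the secure port (querying /healthz with curl requires credentials on v1.20, so a socket check is the simplest):

  systemctl status kube-apiserver --no-pager | head -n 5
  ss -lntp | grep 6443    # expect kube-apiserver listening on 192.168.31.71:6443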

4.5 Deploy kube-controller-manager

1. Create the configuration file

  cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
  KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
  --v=2 \\
  --log-dir=/opt/kubernetes/logs \\
  --leader-elect=true \\
  --kubeconfig=/opt/kubernetes/cfg/kube-controller-manager.kubeconfig \\
  --bind-address=127.0.0.1 \\
  --allocate-node-cidrs=true \\
  --cluster-cidr=10.244.0.0/16 \\
  --service-cluster-ip-range=10.0.0.0/24 \\
  --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
  --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
  --root-ca-file=/opt/kubernetes/ssl/ca.pem \\
  --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
  --cluster-signing-duration=87600h0m0s"
  EOF

  • --kubeconfig: kubeconfig file for connecting to the apiserver
  • --leader-elect: automatic leader election when multiple instances run (HA)
  • --cluster-signing-cert-file / --cluster-signing-key-file: CA that auto-issues kubelet certificates, same as the apiserver's

2. Generate the kubeconfig file

Generate the kube-controller-manager certificate:

  # Switch to the working directory
  cd ~/TLS/k8s

  # Create the certificate signing request file
  cat > kube-controller-manager-csr.json << EOF
  {
    "CN": "system:kube-controller-manager",
    "hosts": [],
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "L": "BeiJing",
        "ST": "BeiJing",
        "O": "system:masters",
        "OU": "System"
      }
    ]
  }
  EOF

  # Generate the certificate
  cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

Generate the kubeconfig file (these are shell commands, run directly in the terminal):

  KUBE_CONFIG="/opt/kubernetes/cfg/kube-controller-manager.kubeconfig"
  KUBE_APISERVER="https://192.168.31.71:6443"

  kubectl config set-cluster kubernetes --certificate-authority=/opt/kubernetes/ssl/ca.pem --embed-certs=true --server=${KUBE_APISERVER} --kubeconfig=${KUBE_CONFIG}
  kubectl config set-credentials kube-controller-manager --client-certificate=./kube-controller-manager.pem --client-key=./kube-controller-manager-key.pem --embed-certs=true --kubeconfig=${KUBE_CONFIG}
  kubectl config set-context default --cluster=kubernetes --user=kube-controller-manager --kubeconfig=${KUBE_CONFIG}
  kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

3. Manage controller-manager with systemd

  cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
  [Unit]
  Description=Kubernetes Controller Manager
  Documentation=https://github.com/kubernetes/kubernetes
  [Service]
  EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
  ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
  Restart=on-failure
  [Install]
  WantedBy=multi-user.target
  EOF

4. Start and enable at boot

  systemctl daemon-reload
  systemctl start kube-controller-manager
  systemctl enable kube-controller-manager

4.6 Deploy kube-scheduler

1. Create the configuration file

  cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF
  KUBE_SCHEDULER_OPTS="--logtostderr=false \\
  --v=2 \\
  --log-dir=/opt/kubernetes/logs \\
  --leader-elect \\
  --kubeconfig=/opt/kubernetes/cfg/kube-scheduler.kubeconfig \\
  --bind-address=127.0.0.1"
  EOF

  • --kubeconfig: kubeconfig file for connecting to the apiserver
  • --leader-elect: automatic leader election when multiple instances run (HA)

2. Generate the kubeconfig file

Generate the kube-scheduler certificate:

  # Switch to the working directory
  cd ~/TLS/k8s

  # Create the certificate signing request file
  cat > kube-scheduler-csr.json << EOF
  {
    "CN": "system:kube-scheduler",
    "hosts": [],
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "L": "BeiJing",
        "ST": "BeiJing",
        "O": "system:masters",
        "OU": "System"
      }
    ]
  }
  EOF

  # Generate the certificate
  cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

Generate the kubeconfig file (these are shell commands, run directly in the terminal):

  KUBE_CONFIG="/opt/kubernetes/cfg/kube-scheduler.kubeconfig"
  KUBE_APISERVER="https://192.168.31.71:6443"

  kubectl config set-cluster kubernetes --certificate-authority=/opt/kubernetes/ssl/ca.pem --embed-certs=true --server=${KUBE_APISERVER} --kubeconfig=${KUBE_CONFIG}
  kubectl config set-credentials kube-scheduler --client-certificate=./kube-scheduler.pem --client-key=./kube-scheduler-key.pem --embed-certs=true --kubeconfig=${KUBE_CONFIG}
  kubectl config set-context default --cluster=kubernetes --user=kube-scheduler --kubeconfig=${KUBE_CONFIG}
  kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

3. Manage scheduler with systemd

  cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
  [Unit]
  Description=Kubernetes Scheduler
  Documentation=https://github.com/kubernetes/kubernetes
  [Service]
  EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
  ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
  Restart=on-failure
  [Install]
  WantedBy=multi-user.target
  EOF

4. Start and enable at boot

  systemctl daemon-reload
  systemctl start kube-scheduler
  systemctl enable kube-scheduler

5. Check cluster status

Generate a certificate for kubectl to connect to the cluster:

  cat > admin-csr.json << EOF
  {
    "CN": "admin",
    "hosts": [],
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "L": "BeiJing",
        "ST": "BeiJing",
        "O": "system:masters",
        "OU": "System"
      }
    ]
  }
  EOF

  cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

Generate the kubeconfig file:

  mkdir /root/.kube

  KUBE_CONFIG="/root/.kube/config"
  KUBE_APISERVER="https://192.168.31.71:6443"

  kubectl config set-cluster kubernetes --certificate-authority=/opt/kubernetes/ssl/ca.pem --embed-certs=true --server=${KUBE_APISERVER} --kubeconfig=${KUBE_CONFIG}
  kubectl config set-credentials cluster-admin --client-certificate=./admin.pem --client-key=./admin-key.pem --embed-certs=true --kubeconfig=${KUBE_CONFIG}
  kubectl config set-context default --cluster=kubernetes --user=cluster-admin --kubeconfig=${KUBE_CONFIG}
  kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

Check the current cluster component status with kubectl:

  kubectl get cs
  NAME                 STATUS    MESSAGE             ERROR
  scheduler            Healthy   ok
  controller-manager   Healthy   ok
  etcd-2               Healthy   {"health":"true"}
  etcd-1               Healthy   {"health":"true"}
  etcd-0               Healthy   {"health":"true"}

Output like the above means the Master components are running normally.

6. Authorize the kubelet-bootstrap user to request certificates

  kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

5. Deploy Worker Nodes


The following is still performed on the Master node, which will double as a Worker Node.

5.1 Create the working directory and copy the binaries

Create working directories on all worker nodes:

  mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}

Copy from the master node:

  cd kubernetes/server/bin
  cp kubelet kube-proxy /opt/kubernetes/bin   # local copy

5.2 Deploy kubelet

1. Create the configuration file

  cat > /opt/kubernetes/cfg/kubelet.conf << EOF
  KUBELET_OPTS="--logtostderr=false \\
  --v=2 \\
  --log-dir=/opt/kubernetes/logs \\
  --hostname-override=k8s-master1 \\
  --network-plugin=cni \\
  --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
  --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
  --config=/opt/kubernetes/cfg/kubelet-config.yml \\
  --cert-dir=/opt/kubernetes/ssl \\
  --pod-infra-container-image=lizhenliang/pause-amd64:3.0"
  EOF

  • --hostname-override: display name, unique within the cluster
  • --network-plugin: enable CNI
  • --kubeconfig: empty path; generated automatically, later used to connect to the apiserver
  • --bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
  • --config: configuration parameter file
  • --cert-dir: directory for generated kubelet certificates
  • --pod-infra-container-image: image of the pause container that manages the Pod network

2. Configuration parameter file

  cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
  kind: KubeletConfiguration
  apiVersion: kubelet.config.k8s.io/v1beta1
  address: 0.0.0.0
  port: 10250
  readOnlyPort: 10255
  cgroupDriver: cgroupfs
  clusterDNS:
  - 10.0.0.2
  clusterDomain: cluster.local
  failSwapOn: false
  authentication:
    anonymous:
      enabled: false
    webhook:
      cacheTTL: 2m0s
      enabled: true
    x509:
      clientCAFile: /opt/kubernetes/ssl/ca.pem
  authorization:
    mode: Webhook
    webhook:
      cacheAuthorizedTTL: 5m0s
      cacheUnauthorizedTTL: 30s
  evictionHard:
    imagefs.available: 15%
    memory.available: 100Mi
    nodefs.available: 10%
    nodefs.inodesFree: 5%
  maxOpenFiles: 1000000
  maxPods: 110
  EOF

3. Generate the bootstrap kubeconfig for the kubelet's first join

  KUBE_CONFIG="/opt/kubernetes/cfg/bootstrap.kubeconfig"
  KUBE_APISERVER="https://192.168.31.71:6443"   # apiserver IP:PORT
  TOKEN="c47ffb939f5ca36231d9e3121a252940"      # must match token.csv

  # Generate the kubelet bootstrap kubeconfig
  kubectl config set-cluster kubernetes --certificate-authority=/opt/kubernetes/ssl/ca.pem --embed-certs=true --server=${KUBE_APISERVER} --kubeconfig=${KUBE_CONFIG}
  kubectl config set-credentials "kubelet-bootstrap" --token=${TOKEN} --kubeconfig=${KUBE_CONFIG}
  kubectl config set-context default --cluster=kubernetes --user="kubelet-bootstrap" --kubeconfig=${KUBE_CONFIG}
  kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

4. Manage kubelet with systemd

  cat > /usr/lib/systemd/system/kubelet.service << EOF
  [Unit]
  Description=Kubernetes Kubelet
  After=docker.service
  [Service]
  EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
  ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
  Restart=on-failure
  LimitNOFILE=65536
  [Install]
  WantedBy=multi-user.target
  EOF

5. Start and enable at boot

  systemctl daemon-reload
  systemctl start kubelet
  systemctl enable kubelet

5.3 Approve the kubelet certificate request and join the cluster

  # List kubelet certificate requests
  kubectl get csr
  NAME                                                   AGE    SIGNERNAME                                    REQUESTOR           CONDITION
  node-csr-uCEGPOIiDdlLODKts8J658HrFq9CZ--K6M4G7bjhk8A   6m3s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

  # Approve the request
  kubectl certificate approve node-csr-uCEGPOIiDdlLODKts8J658HrFq9CZ--K6M4G7bjhk8A

  # List nodes
  kubectl get node
  NAME          STATUS     ROLES    AGE   VERSION
  k8s-master1   NotReady   <none>   7s    v1.20.4

Note: the node shows NotReady because the network plugin has not been deployed yet.

5.4 Deploy kube-proxy

1. Create the configuration file

  cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF
  KUBE_PROXY_OPTS="--logtostderr=false \\
  --v=2 \\
  --log-dir=/opt/kubernetes/logs \\
  --config=/opt/kubernetes/cfg/kube-proxy-config.yml"
  EOF

2. Configuration parameter file

  cat > /opt/kubernetes/cfg/kube-proxy-config.yml << EOF
  kind: KubeProxyConfiguration
  apiVersion: kubeproxy.config.k8s.io/v1alpha1
  bindAddress: 0.0.0.0
  metricsBindAddress: 0.0.0.0:10249
  clientConnection:
    kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
  hostnameOverride: k8s-master1
  clusterCIDR: 10.0.0.0/24
  EOF

3. Generate the kube-proxy.kubeconfig file

Generate the kube-proxy certificate:

  # Switch to the working directory
  cd ~/TLS/k8s

  # Create the certificate signing request file
  cat > kube-proxy-csr.json << EOF
  {
    "CN": "system:kube-proxy",
    "hosts": [],
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "L": "BeiJing",
        "ST": "BeiJing",
        "O": "k8s",
        "OU": "System"
      }
    ]
  }
  EOF

  # Generate the certificate
  cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

Generate the kubeconfig file:

  KUBE_CONFIG="/opt/kubernetes/cfg/kube-proxy.kubeconfig"
  KUBE_APISERVER="https://192.168.31.71:6443"

  kubectl config set-cluster kubernetes --certificate-authority=/opt/kubernetes/ssl/ca.pem --embed-certs=true --server=${KUBE_APISERVER} --kubeconfig=${KUBE_CONFIG}
  kubectl config set-credentials kube-proxy --client-certificate=./kube-proxy.pem --client-key=./kube-proxy-key.pem --embed-certs=true --kubeconfig=${KUBE_CONFIG}
  kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=${KUBE_CONFIG}
  kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

4. Manage kube-proxy with systemd

  cat > /usr/lib/systemd/system/kube-proxy.service << EOF
  [Unit]
  Description=Kubernetes Proxy
  After=network.target
  [Service]
  EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
  ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
  Restart=on-failure
  LimitNOFILE=65536
  [Install]
  WantedBy=multi-user.target
  EOF

5. Start and enable at boot

  systemctl daemon-reload
  systemctl start kube-proxy
  systemctl enable kube-proxy

5.5 Deploy the Network Component

Calico is a pure layer-3 data center networking solution and is currently the mainstream network choice for Kubernetes.
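
The calico.yaml manifest used below is not part of the binary packages and has to be fetched separately. A common source is sketched here; the URL and the exact Calico version are assumptions, so consult the Calico docs for a release compatible with v1.20:

  # Assumed manifest source; pick a Calico version compatible with your cluster
  wget https://docs.projectcalico.org/manifests/calico.yaml
  # In the manifest, CALICO_IPV4POOL_CIDR should match --cluster-cidr (10.244.0.0/16)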

Deploy Calico:

  kubectl apply -f calico.yaml
  kubectl get pods -n kube-system

Once the Calico Pods are all Running, the node becomes Ready as well:

  kubectl get node
  NAME         STATUS   ROLES    AGE   VERSION
  k8s-master   Ready    <none>   37m   v1.20.4

5.6 Authorize apiserver Access to the kubelet

Use case: e.g. kubectl logs

  cat > apiserver-to-kubelet-rbac.yaml << EOF
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    annotations:
      rbac.authorization.kubernetes.io/autoupdate: "true"
    labels:
      kubernetes.io/bootstrapping: rbac-defaults
    name: system:kube-apiserver-to-kubelet
  rules:
    - apiGroups:
        - ""
      resources:
        - nodes/proxy
        - nodes/stats
        - nodes/log
        - nodes/spec
        - nodes/metrics
        - pods/log
      verbs:
        - "*"
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: system:kube-apiserver
    namespace: ""
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: system:kube-apiserver-to-kubelet
  subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: User
      name: kubernetes
  EOF

  kubectl apply -f apiserver-to-kubelet-rbac.yaml

5.7 Add a New Worker Node

1. Copy the deployed Node files to the new nodes

On the Master node, copy the files a Worker Node needs to the new nodes 192.168.31.72/73:

  scp -r /opt/kubernetes root@192.168.31.72:/opt/
  scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.31.72:/usr/lib/systemd/system
  scp /opt/kubernetes/ssl/ca.pem root@192.168.31.72:/opt/kubernetes/ssl

2. Delete the kubelet certificate and kubeconfig

  rm -f /opt/kubernetes/cfg/kubelet.kubeconfig
  rm -f /opt/kubernetes/ssl/kubelet*

Note: these files are generated automatically when a certificate request is approved, and they are unique to each Node, so they must be deleted.

3. Change the hostname

  vi /opt/kubernetes/cfg/kubelet.conf
  --hostname-override=k8s-node1

  vi /opt/kubernetes/cfg/kube-proxy-config.yml
  hostnameOverride: k8s-node1

4. Start and enable at boot

  systemctl daemon-reload
  systemctl start kubelet kube-proxy
  systemctl enable kubelet kube-proxy

5. Approve the new Node's kubelet certificate request on the Master

  # List certificate requests
  kubectl get csr
  NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
  node-csr-4zTjsaVSrhuyhIGqsefxzVoZDCNKei-aE2jyTP81Uro   89s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

  # Approve the request
  kubectl certificate approve node-csr-4zTjsaVSrhuyhIGqsefxzVoZDCNKei-aE2jyTP81Uro

6. Check Node status

  kubectl get node
  NAME          STATUS   ROLES    AGE     VERSION
  k8s-master1   Ready    <none>   47m     v1.20.4
  k8s-node1     Ready    <none>   6m49s   v1.20.4

Do the same for Node2 (192.168.31.73). Remember to change the hostname!

6. Deploy Dashboard and CoreDNS

6.1 Deploy Dashboard
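
The kubernetes-dashboard.yaml applied below is assumed to be the upstream Dashboard recommended.yaml with its Service changed to type NodePort on port 30001; the URL and version here are assumptions, so pick the Dashboard release matching your cluster:

  # Assumed source; edit the Service to type: NodePort with nodePort: 30001 before applying
  wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.4/aio/deploy/recommended.yaml -O kubernetes-dashboard.yaml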

  kubectl apply -f kubernetes-dashboard.yaml
  # Check the deployment
  kubectl get pods,svc -n kubernetes-dashboard

Access URL: https://NodeIP:30001

Create a service account and bind it to the default cluster-admin cluster role:

  kubectl create serviceaccount dashboard-admin -n kube-system
  kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
  kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

Use the token from the output to log in to the Dashboard.


6.2 Deploy CoreDNS

CoreDNS resolves Service names inside the cluster.

  kubectl apply -f coredns.yaml
  kubectl get pods -n kube-system
  NAME                       READY   STATUS    RESTARTS   AGE
  coredns-5ffbfd976d-j6shb   1/1     Running   0          32s

DNS resolution test:

  kubectl run -it --rm dns-test --image=busybox:1.28.4 sh
  If you don't see a command prompt, try pressing enter.
  / # nslookup kubernetes
  Server:    10.0.0.2
  Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local
  Name:      kubernetes
  Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local

Resolution works.

At this point the single-Master cluster is complete! It is already enough for learning and experimentation; if your machines have the capacity, continue below and scale out to a multi-Master cluster.

7. Scale Out to Multiple Masters (HA Architecture)

As a container cluster system, Kubernetes provides application-level high availability on its own: health checks plus restart policies give Pods self-healing, and the scheduler spreads Pods across Nodes, keeps the desired replica count, and brings Pods up on other Nodes when a Node fails.

For the cluster itself, high availability involves two further layers: the Etcd database and the Kubernetes Master components. Etcd is already highly available as a 3-node cluster; this part explains and implements high availability for the Master nodes.

The Master node plays the role of control center, maintaining the health of the whole cluster by continuously communicating with the kubelet and kube-proxy on the worker nodes. If the Master fails, no cluster management is possible through kubectl or the API.

The Master runs three main services: kube-apiserver, kube-controller-manager and kube-scheduler. kube-controller-manager and kube-scheduler already achieve high availability through their built-in leader election, so Master HA comes down to kube-apiserver. Since that component serves an HTTP API, making it highly available is the same problem as for a web server: put a load balancer in front of it, and scale horizontally as needed.

Multi-Master architecture diagram: [image]

7.1 Deploy Master2 Node

Now a new server is added as Master2 Node, with IP 192.168.31.74.

To save resources, you can also reuse the previously deployed Worker Node1 in the Master2 role (i.e. deploy the Master components on it).

Master2's setup is identical to the already-deployed Master1, so we only need to copy all the K8s files from Master1, then adjust the server IP and hostname, and start the services.

1. Install Docker

  scp /usr/bin/docker* root@192.168.31.74:/usr/bin
  scp /usr/bin/runc root@192.168.31.74:/usr/bin
  scp /usr/bin/containerd* root@192.168.31.74:/usr/bin
  scp /usr/lib/systemd/system/docker.service root@192.168.31.74:/usr/lib/systemd/system
  scp -r /etc/docker root@192.168.31.74:/etc

  # Start Docker on Master2
  systemctl daemon-reload
  systemctl start docker
  systemctl enable docker

2. Create the etcd certificate directory

Create the etcd certificate directory on Master2:

  mkdir -p /opt/etcd/ssl

3. Copy files (on Master1)

Copy all the K8s files and etcd certificates from Master1 to Master2:

  scp -r /opt/kubernetes root@192.168.31.74:/opt
  scp -r /opt/etcd/ssl root@192.168.31.74:/opt/etcd
  scp /usr/lib/systemd/system/kube* root@192.168.31.74:/usr/lib/systemd/system
  scp /usr/bin/kubectl root@192.168.31.74:/usr/bin
  scp -r ~/.kube root@192.168.31.74:~

4. Delete certificate files

Delete the kubelet certificate and kubeconfig:

  rm -f /opt/kubernetes/cfg/kubelet.kubeconfig
  rm -f /opt/kubernetes/ssl/kubelet*

5. Change the IPs and hostname in the configuration files

Change the apiserver, kubelet and kube-proxy configuration files to the local IP:

  vi /opt/kubernetes/cfg/kube-apiserver.conf
  ...
  --bind-address=192.168.31.74 --advertise-address=192.168.31.74 ...

  vi /opt/kubernetes/cfg/kubelet.conf
  --hostname-override=k8s-master2

  vi /opt/kubernetes/cfg/kube-proxy-config.yml
  hostnameOverride: k8s-master2

6. Start and enable at boot

  systemctl daemon-reload
  systemctl start kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy
  systemctl enable kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy

7. Check cluster status

Point kubectl at the local master:

  vi ~/.kube/config
  ...
  server: https://192.168.31.74:6443

  kubectl get cs
  NAME                 STATUS    MESSAGE             ERROR
  scheduler            Healthy   ok
  controller-manager   Healthy   ok
  etcd-1               Healthy   {"health":"true"}
  etcd-2               Healthy   {"health":"true"}
  etcd-0               Healthy   {"health":"true"}

8. Approve the kubelet certificate request

  # List certificate requests
  kubectl get csr
  NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
  node-csr-JYNknakEa_YpHz797oKaN-ZTk43nD51Zc9CJkBLcASU   85m   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

  # Approve the request
  kubectl certificate approve node-csr-JYNknakEa_YpHz797oKaN-ZTk43nD51Zc9CJkBLcASU

  # List Nodes
  kubectl get node
  NAME          STATUS   ROLES    AGE   VERSION
  k8s-master1   Ready    <none>   34h   v1.20.4
  k8s-master2   Ready    <none>   2m    v1.20.4
  k8s-node1     Ready    <none>   33h   v1.20.4
  k8s-node2     Ready    <none>   33h   v1.20.4


7.2 Deploy the Nginx + Keepalived HA Load Balancer

kube-apiserver HA architecture diagram: [image]

  • Nginx is a mainstream web server and reverse proxy; here it load-balances the apiservers at layer 4 (stream).
  • Keepalived is mainstream high-availability software that implements active/standby failover through a floating VIP. In this topology, Keepalived decides whether to fail over (move the VIP) based on Nginx's state: when the primary Nginx node dies, the VIP automatically binds to the backup Nginx node, keeping the VIP reachable at all times and making Nginx highly available.

Note 1: to save machines, the load balancers are co-located with the K8s Master nodes here. They can also run outside the K8s cluster, as long as nginx can reach the apiservers.

Note 2: public clouds generally do not support keepalived; there, use the cloud's load balancer product instead to balance the Master kube-apiservers directly, with the same architecture as above.

Perform the following on both Master nodes.

1. Install packages (primary/backup)

  yum install epel-release -y
  yum install nginx keepalived -y

2. Nginx configuration file (identical on primary and backup)

  cat > /etc/nginx/nginx.conf << "EOF"
  user nginx;
  worker_processes auto;
  error_log /var/log/nginx/error.log;
  pid /run/nginx.pid;
  include /usr/share/nginx/modules/*.conf;

  events {
      worker_connections 1024;
  }

  # Layer-4 load balancing across the two Master apiservers
  stream {
      log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
      access_log /var/log/nginx/k8s-access.log main;

      upstream k8s-apiserver {
          server 192.168.31.71:6443;   # Master1 APISERVER IP:PORT
          server 192.168.31.74:6443;   # Master2 APISERVER IP:PORT
      }

      server {
          listen 16443;  # since nginx shares the machine with the master, this cannot be 6443 or it would conflict
          proxy_pass k8s-apiserver;
      }
  }

  http {
      log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
      access_log /var/log/nginx/access.log main;
      sendfile on;
      tcp_nopush on;
      tcp_nodelay on;
      keepalive_timeout 65;
      types_hash_max_size 2048;
      include /etc/nginx/mime.types;
      default_type application/octet-stream;

      server {
          listen 80 default_server;
          server_name _;
          location / {
          }
      }
  }
  EOF

3. Keepalived configuration file (Nginx primary)

  cat > /etc/keepalived/keepalived.conf << EOF
  global_defs {
      notification_email {
          acassen@firewall.loc
          failover@firewall.loc
          sysadmin@firewall.loc
      }
      notification_email_from Alexandre.Cassen@firewall.loc
      smtp_server 127.0.0.1
      smtp_connect_timeout 30
      router_id NGINX_MASTER
  }

  vrrp_script check_nginx {
      script "/etc/keepalived/check_nginx.sh"
  }

  vrrp_instance VI_1 {
      state MASTER
      interface ens33           # change to your actual NIC name
      virtual_router_id 51      # VRRP route ID, unique per instance
      priority 100              # priority; set 90 on the backup server
      advert_int 1              # VRRP heartbeat advertisement interval, default 1s
      authentication {
          auth_type PASS
          auth_pass 1111
      }
      # Virtual IP
      virtual_ipaddress {
          192.168.31.88/24
      }
      track_script {
          check_nginx
      }
  }
  EOF

  • vrrp_script: the script that checks nginx's state (used to decide whether to fail over)
  • virtual_ipaddress: virtual IP (VIP)

Prepare the nginx health-check script referenced in the configuration:

  cat > /etc/keepalived/check_nginx.sh << "EOF"
  #!/bin/bash
  count=$(ss -antp |grep 16443 |egrep -cv "grep|$$")

  if [ "$count" -eq 0 ];then
      exit 1
  else
      exit 0
  fi
  EOF
  chmod +x /etc/keepalived/check_nginx.sh

4. Keepalived configuration file (Nginx backup)

  cat > /etc/keepalived/keepalived.conf << EOF
  global_defs {
      notification_email {
          acassen@firewall.loc
          failover@firewall.loc
          sysadmin@firewall.loc
      }
      notification_email_from Alexandre.Cassen@firewall.loc
      smtp_server 127.0.0.1
      smtp_connect_timeout 30
      router_id NGINX_BACKUP
  }

  vrrp_script check_nginx {
      script "/etc/keepalived/check_nginx.sh"
  }

  vrrp_instance VI_1 {
      state BACKUP
      interface ens33
      virtual_router_id 51   # VRRP route ID, unique per instance
      priority 90
      advert_int 1
      authentication {
          auth_type PASS
          auth_pass 1111
      }
      virtual_ipaddress {
          192.168.31.88/24
      }
      track_script {
          check_nginx
      }
  }
  EOF

Prepare the nginx health-check script referenced in the configuration:

  cat > /etc/keepalived/check_nginx.sh << "EOF"
  #!/bin/bash
  count=$(ss -antp |grep 16443 |egrep -cv "grep|$$")

  if [ "$count" -eq 0 ];then
      exit 1
  else
      exit 0
  fi
  EOF
  chmod +x /etc/keepalived/check_nginx.sh

Note: keepalived decides whether to fail over based on the script's exit code (0 = healthy, non-zero = unhealthy).

5. Start and enable at boot

  systemctl daemon-reload
  systemctl start nginx keepalived
  systemctl enable nginx keepalived

6. Check keepalived status

  ip addr
  1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
      link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
      inet 127.0.0.1/8 scope host lo
         valid_lft forever preferred_lft forever
      inet6 ::1/128 scope host
         valid_lft forever preferred_lft forever
  2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
      link/ether 00:0c:29:04:f7:2c brd ff:ff:ff:ff:ff:ff
      inet 192.168.31.80/24 brd 192.168.31.255 scope global noprefixroute ens33
         valid_lft forever preferred_lft forever
      inet 192.168.31.88/24 scope global secondary ens33
         valid_lft forever preferred_lft forever
      inet6 fe80::20c:29ff:fe04:f72c/64 scope link
         valid_lft forever preferred_lft forever

As shown, the virtual IP 192.168.31.88 is bound to the ens33 interface, so keepalived is working properly.

7. Nginx + Keepalived failover test

Kill Nginx on the primary node and verify that the VIP drifts to the backup server.

On the Nginx primary, run: pkill nginx
On the Nginx backup, run ip addr and confirm the VIP is now bound there.

8. Access test through the load balancer

From any node in the K8s cluster, query the K8s version through the VIP with curl:

  curl -k https://192.168.31.88:16443/version
  {
    "major": "1",
    "minor": "20",
    "gitVersion": "v1.20.4",
    "gitCommit": "e87da0bd6e03ec3fea7933c4b5263d151aafd07c",
    "gitTreeState": "clean",
    "buildDate": "2021-02-18T16:03:00Z",
    "goVersion": "go1.15.8",
    "compiler": "gc",
    "platform": "linux/amd64"
  }

Getting the K8s version info back correctly means the load balancer is working. Request flow: curl -> VIP (nginx) -> apiserver

The Nginx log also shows which apiserver each request was forwarded to:

  tail /var/log/nginx/k8s-access.log -f
  192.168.31.71 192.168.31.71:6443 - [02/Apr/2021:19:17:57 +0800] 200 423
  192.168.31.71 192.168.31.74:6443 - [02/Apr/2021:19:18:50 +0800] 200 423

We are not done yet; the most critical step is still below.

7.3 Point All Worker Nodes at the LB VIP

Think about it: although Master2 and a load balancer were added, we scaled out from a single-Master architecture, which means all Worker Node components still connect to Master1. If they are not switched over to the VIP behind the load balancer, the Master is still a single point of failure.

So the next step is to change the component configuration files on all Worker Nodes (every node listed by kubectl get node) from the original 192.168.31.71 to 192.168.31.88 (the VIP).

Run on all Worker Nodes:

  sed -i 's#192.168.31.71:6443#192.168.31.88:16443#' /opt/kubernetes/cfg/*
  systemctl restart kubelet kube-proxy
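
To confirm the rewrite took effect on each node, the kubeconfig files under /opt/kubernetes/cfg should now all point at the VIP:

  grep "192.168.31.88:16443" /opt/kubernetes/cfg/*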

Check node status:

  kubectl get node
  NAME          STATUS   ROLES    AGE   VERSION
  k8s-master1   Ready    <none>   32d   v1.20.4
  k8s-master2   Ready    <none>   10m   v1.20.4
  k8s-node1     Ready    <none>   31d   v1.20.4
  k8s-node2     Ready    <none>   31d   v1.20.4

With that, a complete Kubernetes high-availability cluster is deployed!

Original article: https://www.cnblogs.com/xw-01/p/18262554
