29. Mastery Chapter: Hands-On Series II, Deploying a Highly Available Kubernetes Cluster from Binaries

2020-02-09 14:36:39

This chapter deploys a highly available Kubernetes cluster from pure binaries, with HTTPS everywhere (certificates valid for 10 years); all configuration files and images involved are provided. By default the cluster scale supports 254 nodes. To adjust this, edit the --node-cidr-mask-size=24 field in /etc/kubernetes/controller-manager.
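
Where that scale figure comes from (a hedged sketch; the /16 cluster CIDR is an assumption, matching flannel's upstream default of 10.244.0.0/16): a per-node mask of /24 carves a /16 into 2^(24-16) = 256 node subnets, each with 254 usable pod addresses.

#assumptions: cluster CIDR prefix /16, per-node mask /24
cluster_prefix=16
node_mask=24
echo "node subnets:  $(( 2 ** (node_mask - cluster_prefix) ))"    #256
echo "pods per node: $(( 2 ** (32 - node_mask) - 2 ))"            #254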

1. High-Availability Design Principles for Production

Highly available etcd cluster:

Back it up regularly, and run 3, 5, or 7 members (an odd number) to add a degree of redundancy.

Highly available Masters:

1>kube-apiserver is stateless and can run as multiple instances: put HAProxy, Nginx, or Keepalived in front to steer VIP traffic across the instances; users and cluster clients access the VIP.
2>kube-scheduler and kube-controller-manager: only one instance may be active at a time, but several standbys can run alongside it (active/standby mode).

2. Production High-Availability Architecture

[Figure k8s-ha.png: production high-availability architecture]

This is the production high-availability architecture diagram. To keep things from getting too complex, this article resolves the apiserver VIP directly through the hosts file; later you can put your own Nginx or LVS tier in front to adapt it to your environment, along the lines of the sketch below.
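
A minimal sketch of such a front tier, assuming Nginx built with the stream module; the upstream addresses come from the host table in the next section, and plain TCP passthrough keeps TLS end to end so the apiserver certificates need no extra SANs:

#hypothetical nginx.conf fragment: L4 passthrough for kube-apiserver
stream {
    upstream kube_apiserver {
        server 192.168.0.111:6443 max_fails=2 fail_timeout=5s;
        server 192.168.0.112:6443 max_fails=2 fail_timeout=5s;
        server 192.168.0.113:6443 max_fails=2 fail_timeout=5s;
    }
    server {
        listen 6443;              #kubernetes-api.ilinux.io would resolve here
        proxy_pass kube_apiserver;
    }
}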

3. Base Environment Preparation

Component   Version
kubernetes  v1.13.4
etcd        3.3.11
docker-ce   19.03.5
cni         v0.8.1
OS          CentOS 7.x (the el7 packages used below require it)

Hostname                  IP              Components                                    Role
k8s-etcd-master01.shared  192.168.0.111   etcd/apiserver/controller-manager/scheduler   Master
k8s-etcd-master02.shared  192.168.0.112   etcd/apiserver/controller-manager/scheduler   Master
k8s-etcd-master03.shared  192.168.0.113   etcd/apiserver/controller-manager/scheduler   Master
k8s-node01.shared         192.168.0.114   kubelet/kube-proxy                            Node

hosts entries and time synchronization (NTP setup omitted):

192.168.0.111   k8s-etcd-master01.shared   k8s-master01 etcd01 etcd01.ilinux.io k8s-master01.ilinux.io kubernetes-api.ilinux.io
192.168.0.112   k8s-etcd-master02.shared   k8s-master02 etcd02 etcd02.ilinux.io k8s-master02.ilinux.io
192.168.0.113   k8s-etcd-master03.shared   k8s-master03 etcd03 etcd03.ilinux.io k8s-master03.ilinux.io
192.168.0.114   k8s-node01.shared

Disable the firewall:

systemctl stop firewalld.service
systemctl stop iptables.service
systemctl disable firewalld.service
systemctl disable iptables.service

Disable SELinux:

#temporarily:
setenforce 0

#permanently:
vi /etc/selinux/config
Change SELINUX=enforcing to SELINUX=disabled
A reboot is required for the permanent change to take effect
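
The same permanent edit, non-interactively (a one-liner equivalent of the vi step above):

#flip SELINUX=enforcing to disabled in place
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config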

Disable the swap device

Temporarily: swapoff -a

Permanently:
vim /etc/fstab
Comment out the /dev/mapper/VolGroup-lv_swap swap line
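
Again as a one-liner (the device path is the one named above; adjust the pattern if yours differs, and run it only once):

#comment out the swap entry in /etc/fstab
sed -i '/VolGroup-lv_swap/s/^/#/' /etc/fstab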

Update nss, curl, and libcurl (otherwise git clone will fail):

yum update -y nss curl libcurl

4. Docker Environment Preparation

For installation, simply follow the first five steps of the beginner chapter on kubeadm ("the trusty blade that everyone who used it praises").

5. Etcd Cluster Deployment

File       Path                             Description
etcd.conf  /etc/etcd/                       configuration file
pki/*      /etc/etcd/                       certificate files
pki/*      /root/k8s-certs-generator/etcd   where the certificates are generated (backup copy)
k8s.etcd   /var/lib/etcd/                   data files; back up regularly

Tip: the steps below walk through the whole build, from nothing -> HTTP cluster -> HTTPS cluster. If you intend to go straight to HTTPS, you can skip steps 4 and 5.

Master nodes:

1) Install etcd 3.3.11

yum install -y etcd-3.3.11-2.el7.centos

2) You can check whether the install succeeded with the following command

rpm -ql etcd

3) Edit /etc/etcd/etcd.conf on each of the three nodes; ETCD_LISTEN_PEER_URLS, ETCD_LISTEN_CLIENT_URLS, ETCD_NAME, ETCD_INITIAL_ADVERTISE_PEER_URLS, and ETCD_ADVERTISE_CLIENT_URLS must each be changed to that node's own values

#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
ETCD_LISTEN_PEER_URLS="http://192.168.0.111:2380"  #address for peer traffic within the cluster
ETCD_LISTEN_CLIENT_URLS="http://192.168.0.111:2379" #address for client access
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="etcd01"           #this node's etcd name
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://etcd01:2380"  #peer address advertised for cluster bootstrap; hostnames allowed
ETCD_ADVERTISE_CLIENT_URLS="http://etcd01:2379"      #client address advertised; hostnames allowed
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
ETCD_INITIAL_CLUSTER="etcd01=http://etcd01:2380,etcd02=http://etcd02:2380,etcd03=http://etcd03:2380"    #static cluster bootstrap list
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
#ETCD_INITIAL_CLUSTER_STATE="new"

4) Start etcd on the nodes one after another, in reverse order

systemctl start etcd
systemctl enable etcd

5) The highly available etcd cluster is now running (but its internal traffic is plain HTTP, not a secure protocol)

[root@k8s-etcd-master02 /]# etcdctl --endpoints='http://etcd01:2379' member list
b3504381e8ba3cb: name=etcd02 peerURLs=http://etcd02:2380 clientURLs=http://etcd02:2379 isLeader=false
b8b747c74aaea686: name=etcd01 peerURLs=http://etcd01:2380 clientURLs=http://etcd01:2379 isLeader=false
f572fdfc5cb68406: name=etcd03 peerURLs=http://etcd03:2380 clientURLs=http://etcd03:2379 isLeader=true

6) git clone the cert-generator project to the local machine, then generate the etcd certificates with bash gencerts.sh etcd. The default domain is ilinux.io; enter your own if you prefer, then press Enter

cd ~ && git clone https://github.com/Aaron1989/k8s-certs-generator.git
cd k8s-certs-generator

[root@k8s-etcd-master01 k8s-certs-generator]# bash gencerts.sh etcd
Enter Domain Name [ilinux.io]:

7) The generated and organized certificates look like this:

[root@k8s-etcd-master01 k8s-certs-generator]# tree etcd
etcd
├── patches
│   └── etcd-client-cert.patch        
└── pki
    ├── apiserver-etcd-client.crt     #certificate for the apiserver to reach the etcd cluster as a client
    ├── apiserver-etcd-client.key     #private key for the apiserver to reach the etcd cluster as a client
    ├── ca.crt                        #CA certificate (also passed via --ca-file when testing the HTTPS endpoints)
    ├── ca.key                        #CA private key
    ├── client.crt               #client certificate; the apiserver could use this one as well
    ├── client.key               #client private key; same note
    ├── peer.crt                 #certificate for etcd peer-to-peer traffic
    ├── peer.key                 #private key for etcd peer-to-peer traffic
    ├── server.crt               #server certificate
    └── server.key               #server private key

8) Distribute the certificates to /etc/etcd/ on every Master node

cd etcd
#this node
cp -a pki/ /etc/etcd/

#the other nodes
scp -rp pki/ etcd02:/etc/etcd/
scp -rp pki/ etcd03:/etc/etcd/

9) On every Master node, edit the Security section of /etc/etcd/etcd.conf

#[Security]
ETCD_CERT_FILE="/etc/etcd/pki/server.crt"
ETCD_KEY_FILE="/etc/etcd/pki/server.key"
ETCD_CLIENT_CERT_AUTH="true"                        #the server must verify client certificates
ETCD_TRUSTED_CA_FILE="/etc/etcd/pki/ca.crt"
#ETCD_AUTO_TLS="false"
ETCD_PEER_CERT_FILE="/etc/etcd/pki/peer.crt"
ETCD_PEER_KEY_FILE="/etc/etcd/pki/peer.key"
ETCD_PEER_CLIENT_CERT_AUTH="true"                    #cluster peers must verify each other's certificates
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/pki/ca.crt"
#ETCD_PEER_AUTO_TLS="false"

10) Change every http from step 3 to https; point ETCD_DATA_DIR at a new directory; change ETCD_NAME to match the Domain Name set when the CA was generated (e.g. ETCD_NAME="etcd03.ilinux.io"); finally set ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"

#[Member]
ETCD_DATA_DIR="/var/lib/etcd/k8s.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.0.113:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.0.113:2379"
ETCD_NAME="etcd03.ilinux.io"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://etcd03.ilinux.io:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://etcd03.ilinux.io:2379"
ETCD_INITIAL_CLUSTER="etcd01.ilinux.io=https://etcd01.ilinux.io:2380,etcd02.ilinux.io=https://etcd02.ilinux.io:2380,etcd03.ilinux.io=https://etcd03.ilinux.io:2380"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"

#[Security]
ETCD_CERT_FILE="/etc/etcd/pki/server.crt"
ETCD_KEY_FILE="/etc/etcd/pki/server.key"
ETCD_CLIENT_CERT_AUTH="true"                 #the server must verify client certificates
ETCD_TRUSTED_CA_FILE="/etc/etcd/pki/ca.crt"
ETCD_PEER_CERT_FILE="/etc/etcd/pki/peer.crt"
ETCD_PEER_KEY_FILE="/etc/etcd/pki/peer.key"
ETCD_PEER_CLIENT_CERT_AUTH="true"             #cluster peers must verify each other's certificates
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/pki/ca.crt"

11) Stop etcd on all members, start it again everywhere, then access the cluster using the client certificate generated earlier. The etcd HTTPS cluster is now complete

#stop on all members
systemctl stop etcd

#start on all members
systemctl start etcd

#check cluster health with the certificates
[root@k8s-etcd-master01 etcd]# etcdctl --endpoints='https://etcd01.ilinux.io:2379' --cert-file=/etc/etcd/pki/client.crt --key-file=/etc/etcd/pki/client.key --ca-file=/etc/etcd/pki/ca.crt cluster-health
member 1f22dc5568642e6f is healthy: got healthy result from https://etcd03.ilinux.io:2379
member 433f227ff9ad65cd is healthy: got healthy result from https://etcd02.ilinux.io:2379
member c4eb31a06cd36dd7 is healthy: got healthy result from https://etcd01.ilinux.io:2379
cluster is healthy
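
The command above uses the v2-style flags that etcdctl defaults to in etcd 3.3. For reference, the same health check through the v3 API looks roughly like this (note the different flag spellings):

#v3 API: --cacert/--cert/--key instead of --ca-file/--cert-file/--key-file
ETCDCTL_API=3 etcdctl --endpoints='https://etcd01.ilinux.io:2379' \
  --cacert=/etc/etcd/pki/ca.crt \
  --cert=/etc/etcd/pki/client.crt \
  --key=/etc/etcd/pki/client.key \
  endpoint health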

6. Master Configuration

File                             Path                                                 Description
.*                               /etc/kubernetes                                      actual master-side configuration files
auth/*                           /etc/kubernetes                                      auth kubeconfig files
pki/*                            /etc/kubernetes                                      pki certificate files
token.csv                        /etc/kubernetes                                      bootstrap token file
token.csv                        /root/k8s-certs-generator/kubernetes/k8s-master01    bootstrap token file (backup copy)
.*                               /root/k8s-certs-generator/kubernetes                 where the certificates are generated (backup copy)
.*                               /usr/local/kubernetes/server/bin                     binaries
kube-apiserver.service           /usr/lib/systemd/system                              apiserver unit file
kube-controller-manager.service  /usr/lib/systemd/system                              controller-manager unit file
kube-scheduler.service           /usr/lib/systemd/system                              scheduler unit file
.*                               /var/run/kubernetes                                  k8s runtime directory

1) Generate the required certificates and keys, including the client certificate and private key used to access the etcd cluster

#enter the certificate generator's directory
cd /root/k8s-certs-generator

#generate the k8s certificates
[root@k8s-etcd-master01 k8s-certs-generator]# bash gencerts.sh k8s
Enter Domain Name [ilinux.io]:                    #leave unchanged; must match the etcd setup
Enter Kubernetes Cluster Name [kubernetes]:       #customizable
Enter the IP Address in default namespace 
  of the Kubernetes API Server[10.96.0.1]:        #no change needed
Enter Master servers name[master01 master02 master03]: k8s-master01 k8s-master02 k8s-master03
                                                  #master names; each gets joined with the Domain Name

2) All required certificates have been generated and organized

[root@k8s-etcd-master01 k8s-certs-generator]# tree kubernetes/
kubernetes/
├── CA
│   ├── ca.crt
│   └── ca.key
├── front-proxy
│   ├── front-proxy-ca.crt
│   ├── front-proxy-ca.key
│   ├── front-proxy-client.crt
│   └── front-proxy-client.key
├── ingress
│   ├── ingress-server.crt
│   ├── ingress-server.key
│   └── patches
│       └── ingress-tls.patch
├── k8s-master01
│   ├── auth
│   │   ├── admin.conf
│   │   ├── controller-manager.conf
│   │   └── scheduler.conf
│   ├── pki
│   │   ├── apiserver.crt
│   │   ├── apiserver-etcd-client.crt
│   │   ├── apiserver-etcd-client.key
│   │   ├── apiserver.key
│   │   ├── apiserver-kubelet-client.crt
│   │   ├── apiserver-kubelet-client.key
│   │   ├── ca.crt
│   │   ├── ca.key
│   │   ├── front-proxy-ca.crt
│   │   ├── front-proxy-ca.key
│   │   ├── front-proxy-client.crt
│   │   ├── front-proxy-client.key
│   │   ├── kube-controller-manager.crt
│   │   ├── kube-controller-manager.key
│   │   ├── kube-scheduler.crt
│   │   ├── kube-scheduler.key
│   │   ├── sa.key
│   │   └── sa.pub
│   └── token.csv
├── k8s-master02
│   ├── auth
│   │   ├── admin.conf
│   │   ├── controller-manager.conf
│   │   └── scheduler.conf
│   ├── pki
│   │   ├── apiserver.crt
│   │   ├── apiserver-etcd-client.crt
│   │   ├── apiserver-etcd-client.key
│   │   ├── apiserver.key
│   │   ├── apiserver-kubelet-client.crt
│   │   ├── apiserver-kubelet-client.key
│   │   ├── ca.crt
│   │   ├── ca.key
│   │   ├── front-proxy-ca.crt
│   │   ├── front-proxy-ca.key
│   │   ├── front-proxy-client.crt
│   │   ├── front-proxy-client.key
│   │   ├── kube-controller-manager.crt
│   │   ├── kube-controller-manager.key
│   │   ├── kube-scheduler.crt
│   │   ├── kube-scheduler.key
│   │   ├── sa.key
│   │   └── sa.pub
│   └── token.csv
├── k8s-master03
│   ├── auth
│   │   ├── admin.conf
│   │   ├── controller-manager.conf
│   │   └── scheduler.conf
│   ├── pki
│   │   ├── apiserver.crt
│   │   ├── apiserver-etcd-client.crt
│   │   ├── apiserver-etcd-client.key
│   │   ├── apiserver.key
│   │   ├── apiserver-kubelet-client.crt
│   │   ├── apiserver-kubelet-client.key
│   │   ├── ca.crt
│   │   ├── ca.key
│   │   ├── front-proxy-ca.crt
│   │   ├── front-proxy-ca.key
│   │   ├── front-proxy-client.crt
│   │   ├── front-proxy-client.key
│   │   ├── kube-controller-manager.crt
│   │   ├── kube-controller-manager.key
│   │   ├── kube-scheduler.crt
│   │   ├── kube-scheduler.key
│   │   ├── sa.key
│   │   └── sa.pub
│   └── token.csv
└── kubelet
    ├── auth
    │   ├── bootstrap.conf
    │   └── kube-proxy.conf
    └── pki
        ├── ca.crt
        ├── kube-proxy.crt
        └── kube-proxy.key

3) Distribute the certificates to each node

#all nodes
mkdir /etc/kubernetes

#this node
cp -r kubernetes/k8s-master01/* /etc/kubernetes/

#the other nodes
scp -rp kubernetes/k8s-master02/* k8s-master02:/etc/kubernetes/
scp -rp kubernetes/k8s-master03/* k8s-master03:/etc/kubernetes/

4) Fetch the v1.13.4 Kubernetes binaries, extract them to /usr/local, and send a copy to the remaining master nodes

#fetch the image
docker pull registry.cn-hangzhou.aliyuncs.com/aaron89/k8s_bin:v1.13.4

#extract the binaries (the container lives 10 seconds; run the cp right away)
docker run --rm -d --name temp registry.cn-hangzhou.aliyuncs.com/aaron89/k8s_bin:v1.13.4 sleep 10
docker cp temp:/kubernetes-server-linux-amd64.tar.gz .
tar xf kubernetes-server-linux-amd64.tar.gz  -C /usr/local/

#distribute the binaries
scp kubernetes-server-linux-amd64.tar.gz k8s-etcd-master02.shared:~
scp kubernetes-server-linux-amd64.tar.gz k8s-etcd-master03.shared:~

5) Clone the https://github.com/Aaron1989/k8s-bin-inst project and cp the relevant config files into place

#this node
cd ~ && git clone https://github.com/Aaron1989/k8s-bin-inst
cd k8s-bin-inst
cp master/etc/kubernetes/* /etc/kubernetes/
cp master/unit-files/* /usr/lib/systemd/system/

#the other master nodes
scp master/etc/kubernetes/* k8s-master02:/etc/kubernetes/
scp master/unit-files/* k8s-master02:/usr/lib/systemd/system/

scp master/etc/kubernetes/* k8s-master03:/etc/kubernetes/
scp master/unit-files/* k8s-master03:/usr/lib/systemd/system/

6) In the apiserver config file, set KUBE_ETCD_SERVERS as below. Also, the log level in the config file is 0 (debug); leave it alone for now to aid testing, and remember to adjust it once testing is done!

KUBE_ETCD_SERVERS="--etcd-servers=https://etcd01.ilinux.io:2379,https://etcd02.ilinux.io:2379,https://etcd03.ilinux.io:2379"

7) Create the kube user, the kubernetes runtime directory, and its ownership

useradd -r kube
mkdir /var/run/kubernetes
chown kube.kube /var/run/kubernetes/

8) Start the apiserver and check that its status is normal. The apiserver is now running and connected to the etcd cluster with the proper certificates

systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver
systemctl status kube-apiserver

9) Configure kubectl, then check the configuration with kubectl config view

mkdir ~/.kube
ln -sv /usr/local/kubernetes/server/bin/kubectl /usr/bin/
cp /etc/kubernetes/auth/admin.conf ~/.kube/config

#kubectl config view   
[root@k8s-etcd-master01 auth]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://k8s-master01.ilinux.io:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: k8s-admin
  name: k8s-admin@kubernetes
current-context: k8s-admin@kubernetes
kind: Config
preferences: {}
users:
- name: k8s-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

10) Run the get nodes command; if there is no error, everything is working!

[root@k8s-etcd-master01 auth]# kubectl get nodes
No resources found.

11) Create a ClusterRoleBinding that binds the user or the group (either one suffices) from /etc/kubernetes/token.csv (the bootstrap token) to the built-in clusterrole that enables bootstrap tokens, system:node-bootstrapper. Here I use the system:bootstrapper user; the binding is verified in the sketch below.

# token.csv layout: user (system:bootstrapper), group (system:bootstrappers)
b21f94.fbff38f94cfd0713,"system:bootstrapper",10001,"system:bootstrappers"

#grant the permission
kubectl create clusterrolebinding system:bootstrapper --user=system:bootstrapper --clusterrole=system:node-bootstrapper
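
To confirm the binding, and, as a hedged aside, to mint a fresh token in the same id.secret format token.csv uses:

#inspect the binding just created
kubectl describe clusterrolebinding system:bootstrapper

#assumption: a bootstrap token is 6 hex chars, a dot, then 16 hex chars
echo "$(openssl rand -hex 3).$(openssl rand -hex 8)"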

12) Start kube-controller-manager and kube-scheduler

#controller-manager
systemctl start kube-controller-manager
systemctl enable kube-controller-manager
systemctl status kube-controller-manager

#scheduler
systemctl start kube-scheduler
systemctl enable kube-scheduler
systemctl status kube-scheduler

13) A single master is now fully configured

[root@k8s-etcd-master01 kubernetes]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}

7. Node Configuration

File                Path                             Description
.*                  /etc/kubernetes                  actual node-side configuration files
auth/*              /etc/kubernetes                  auth kubeconfig files
pki/*               /etc/kubernetes                  pki certificate files
kubelet             /usr/local/kubernetes/node/bin   kubelet binary
kube-proxy          /usr/local/kubernetes/node/bin   kube-proxy binary
kubelet             /var/lib/                        kubelet configuration
kube-proxy          /var/lib/                        kube-proxy configuration
kubelet.service     /usr/lib/systemd/system          kubelet unit file
kube-proxy.service  /usr/lib/systemd/system          kube-proxy unit file
.*                  /opt/cni/bin                     cni executables
ipvs.modules        /etc/sysconfig/modules/          ipvs module-loading script

Prepare the base environment and the docker-ce runtime on the node yourself first (sections 3 and 4 above).

1) Prepare the configuration files and certificates

#copy the entire kube-proxy and kubelet directories under /var/lib/
cp -rp /root/k8s-bin-inst/nodes/var/lib/kube-proxy/ /var/lib/
cp -rp /root/k8s-bin-inst/nodes/var/lib/kubelet/ /var/lib/

#copy the whole kubernetes/ directory under /etc
cp -rp /root/k8s-bin-inst/master/etc/kubernetes/ /etc/

#distribute the kubelet certificates generated on the Master to /etc/kubernetes/ on the node
#on the master:
cd ~/k8s-certs-generator/kubernetes/kubelet/
scp -r * k8s-node01.shared:/etc/kubernetes/

2) Download the CNI plugins and place them under /opt/cni/bin

wget https://github.com/containernetworking/plugins/releases/download/v0.8.1/cni-plugins-linux-amd64-v0.8.1.tgz
mkdir -p /opt/cni/bin
tar xf cni-plugins-linux-amd64-v0.8.1.tgz  -C /opt/cni/bin/

3) Prepare the kubelet and kube-proxy service files

#put nodes/unit-files/* into the /usr/lib/systemd/system directory
scp /root/k8s-bin-inst/nodes/unit-files/* k8s-node01.shared:/usr/lib/systemd/system/

4) Create the bin directory, then push the kubelet and kube-proxy binaries over from a master

mkdir -p /usr/local/kubernetes/node/bin/

#run on the master that extracted the k8s binaries
scp /usr/local/kubernetes/server/bin/kube{let,-proxy} k8s-node01.shared:/usr/local/kubernetes/node/bin/

5) Start the kubelet

systemctl start  kubelet
systemctl enable  kubelet
systemctl status  kubelet

6) On the master, approve the node's request to join the cluster

#list the pending requests
[root@k8s-etcd-master01 auth]# kubectl get csr
NAME                                                   AGE     REQUESTOR             CONDITION
node-csr-O1ThCQzmKSWv7aUvCBJLF0U2A-FJY73d3l9ui2Zdf74   5m48s   system:bootstrapper   Pending

#approve the request
[root@k8s-etcd-master01 auth]# kubectl certificate approve node-csr-O1ThCQzmKSWv7aUvCBJLF0U2A-FJY73d3l9ui2Zdf74
certificatesigningrequest.certificates.k8s.io/node-csr-O1ThCQzmKSWv7aUvCBJLF0U2A-FJY73d3l9ui2Zdf74 approved

# the node has joined, but is not yet Ready
[root@k8s-etcd-master01 auth]# kubectl get nodes
NAME                STATUS     ROLES    AGE     VERSION
k8s-node01.shared   NotReady   <none>   2m44s   v1.13.4

7) Enable the ipvs kernel modules
Create the module-loading script /etc/sysconfig/modules/ipvs.modules so the required kernel modules are loaded automatically. Its contents:

#!/bin/bash
# load every ipvs-related module shipped with the running kernel
ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
for i in $(ls $ipvs_mods_dir | grep -o "^[^.]*"); do
    /sbin/modinfo -F filename $i  &> /dev/null
    if [ $? -eq 0 ]; then    # only modprobe modules that modinfo can resolve
        /sbin/modprobe $i
    fi
done

# make it executable, run it, and verify
chmod +x /etc/sysconfig/modules/ipvs.modules
/etc/sysconfig/modules/ipvs.modules
lsmod |grep ip_vs

8) Start kube-proxy

[root@k8s-node01 kubernetes]# systemctl  start kube-proxy
[root@k8s-node01 kubernetes]# systemctl  enable kube-proxy
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@k8s-node01 kubernetes]# systemctl  status kube-proxy

9) Pull the images flannel needs

#on the node
docker pull registry.cn-hangzhou.aliyuncs.com/aaron89/flannel:v0.11.0-amd64 
docker tag registry.cn-hangzhou.aliyuncs.com/aaron89/flannel:v0.11.0-amd64    quay.io/coreos/flannel:v0.11.0-amd64

docker pull registry.cn-hangzhou.aliyuncs.com/aaron89/pause:3.1
docker tag registry.cn-hangzhou.aliyuncs.com/aaron89/pause:3.1 k8s.gcr.io/pause:3.1

10) On the master, apply the flannel manifest

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
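
One sanity check worth doing here (a hedged aside): the Network in flannel's net-conf must match the --cluster-cidr handed to kube-controller-manager, otherwise nodes get pod subnets flannel will not route. Downloading first makes that easy to inspect:

#download, inspect the pod network, then apply
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
grep '"Network"' kube-flannel.yml     #upstream default: 10.244.0.0/16
kubectl apply -f kube-flannel.yml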

11) The flannel pod is now running, and the node's status has turned Ready

[root@k8s-etcd-master01 ~]# kubectl get pod -n kube-system -o wide
NAME                          READY   STATUS    RESTARTS   AGE   IP              NODE                NOMINATED NODE   READINESS GATES
kube-flannel-ds-amd64-cj8rh   1/1     Running   0          37m   192.168.0.114   k8s-node01.shared   <none>           <none>

[root@k8s-etcd-master01 ~]# kubectl get node
NAME                STATUS   ROLES    AGE   VERSION
k8s-node01.shared   Ready    <none>   93m   v1.13.4
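
As a quick smoke test of scheduling and the pod network (a hedged sketch; the deployment name and image are arbitrary):

#create a throwaway deployment, watch it land on the node, then clean up
kubectl create deployment ngx-test --image=nginx:1.14-alpine
kubectl get pods -o wide
kubectl delete deployment ngx-test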

8. Component HA Extension: apiserver

8.1. The remaining master nodes

1) Create the kube user, the kubernetes runtime directory, and its ownership

useradd -r kube
mkdir /var/run/kubernetes
chown kube.kube /var/run/kubernetes/

2) Start the apiserver and check that its status is normal; it is now connected to the etcd cluster with the proper certificates

systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver
systemctl status kube-apiserver

3) Configure kubectl (optional)

mkdir ~/.kube
ln -sv /usr/local/kubernetes/server/bin/kubectl /usr/bin/
cp /etc/kubernetes/auth/admin.conf ~/.kube/config

#verify with kubectl
[root@k8s-etcd-master02 kubernetes]# kubectl get node
NAME                STATUS   ROLES    AGE   VERSION
k8s-node01.shared   Ready    <none>   22h   v1.13.4

4) The cluster's high-availability access address should be backed by a VIP or multiple A records for redundancy.
For this chapter it is simply resolved to master01 via the hosts file.

The HA cluster access address:

https://kubernetes-api.ilinux.io:6443

The config files that reference it are:

#master:
/root/k8s-certs-generator/kubernetes/kubelet/auth/bootstrap.conf and kube-proxy.conf

#node:
/etc/kubernetes/auth/bootstrap.conf and kube-proxy.conf
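
If those kubeconfigs were generated pointing at a single master, a hedged sketch of repointing them at the HA name (an assumption about their current contents; check the server: line before editing):

#on the node: switch both kubeconfigs to the HA endpoint, then restart the components
cd /etc/kubernetes/auth
grep 'server:' bootstrap.conf kube-proxy.conf
sed -i 's@server: https://.*:6443@server: https://kubernetes-api.ilinux.io:6443@' bootstrap.conf kube-proxy.conf
systemctl restart kubelet kube-proxy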

9. Component HA Extension: controller-manager

9.1. The remaining master nodes

systemctl start kube-controller-manager
systemctl enable kube-controller-manager
systemctl status kube-controller-manager

10. Component HA Extension: scheduler

10.1. The remaining master nodes

systemctl start kube-scheduler
systemctl enable kube-scheduler
systemctl status kube-scheduler

11. CoreDNS Deployment

1) On Master01:

docker pull registry.cn-hangzhou.aliyuncs.com/aaron89/coredns:1.6.6
docker tag registry.cn-hangzhou.aliyuncs.com/aaron89/coredns:1.6.6 coredns/coredns:1.6.6
wget https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/coredns.yaml.sed
wget https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/deploy.sh
bash deploy.sh -i 10.96.0.10 -r "10.96.0.0/12" -s -t coredns.yaml.sed | kubectl apply -f -

2) Resolution test

#launch a test pod
cat<<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF

#test
[root@k8s-etcd-mater01 ~]# kubectl exec -ti busybox -- nslookup kubernetes
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

12. Cluster HA Testing

12.1. Etcd HA test

#etcd02 is currently the leader
[root@k8s-etcd-master01 unit-files]# etcdctl --endpoints='https://etcd01.ilinux.io:2379' --cert-file=/etc/etcd/pki/client.crt --key-file=/etc/etcd/pki/client.key --ca-file=/etc/etcd/pki/ca.crt member list
1f22dc5568642e6f: name=etcd03.ilinux.io peerURLs=https://etcd03.ilinux.io:2380 clientURLs=https://etcd03.ilinux.io:2379 isLeader=false
433f227ff9ad65cd: name=etcd02.ilinux.io peerURLs=https://etcd02.ilinux.io:2380 clientURLs=https://etcd02.ilinux.io:2379 isLeader=true
c4eb31a06cd36dd7: name=etcd01.ilinux.io peerURLs=https://etcd01.ilinux.io:2380 clientURLs=https://etcd01.ilinux.io:2379 isLeader=false

#stop etcd on etcd02
systemctl stop etcd

#a re-election has taken place and etcd03 is now the leader
[root@k8s-etcd-master01 unit-files]# etcdctl --endpoints='https://etcd01.ilinux.io:2379' --cert-file=/etc/etcd/pki/client.crt --key-file=/etc/etcd/pki/client.key --ca-file=/etc/etcd/pki/ca.crt member list
1f22dc5568642e6f: name=etcd03.ilinux.io peerURLs=https://etcd03.ilinux.io:2380 clientURLs=https://etcd03.ilinux.io:2379 isLeader=true
433f227ff9ad65cd: name=etcd02.ilinux.io peerURLs=https://etcd02.ilinux.io:2380 clientURLs=https://etcd02.ilinux.io:2379 isLeader=false
c4eb31a06cd36dd7: name=etcd01.ilinux.io peerURLs=https://etcd01.ilinux.io:2380 clientURLs=https://etcd01.ilinux.io:2379 isLeader=false

#the cluster status is now degraded, yet the k8s cluster keeps functioning: the etcd HA test passes
[root@k8s-etcd-master01 unit-files]# etcdctl --endpoints='https://etcd01.ilinux.io:2379' --cert-file=/etc/etcd/pki/client.crt --key-file=/etc/etcd/pki/client.key --ca-file=/etc/etcd/pki/ca.crt cluster-health
member 1f22dc5568642e6f is healthy: got healthy result from https://etcd03.ilinux.io:2379
failed to check the health of member 433f227ff9ad65cd on https://etcd02.ilinux.io:2379: Get https://etcd02.ilinux.io:2379/health: dial tcp 192.168.0.112:2379: connect: connection refused
member 433f227ff9ad65cd is unreachable: [https://etcd02.ilinux.io:2379] are all unreachable
member c4eb31a06cd36dd7 is healthy: got healthy result from https://etcd01.ilinux.io:2379
cluster is degraded

#k8s remains functional:
[root@k8s-etcd-master01 unit-files]# kubectl get node
NAME                STATUS   ROLES    AGE   VERSION
k8s-node01.shared   Ready    <none>   22h   v1.13.4

#finally, bring the etcd02 node back; the cluster status returns to healthy
[root@k8s-etcd-master01 unit-files]# etcdctl --endpoints='https://etcd01.ilinux.io:2379' --cert-file=/etc/etcd/pki/client.crt --key-file=/etc/etcd/pki/client.key --ca-file=/etc/etcd/pki/ca.crt cluster-health
member 1f22dc5568642e6f is healthy: got healthy result from https://etcd03.ilinux.io:2379
member 433f227ff9ad65cd is healthy: got healthy result from https://etcd02.ilinux.io:2379
member c4eb31a06cd36dd7 is healthy: got healthy result from https://etcd01.ilinux.io:2379
cluster is healthy

12.2. kube-controller-manager HA test

#controller-manager is currently active on master01 (note: this is an active/standby component, one working instance is enough), with a 15-second lease period
[root@k8s-etcd-master02 kubernetes]# kubectl get endpoints -n kube-system kube-controller-manager -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"k8s-etcd-master01.shared_25338479-2400-11ea-ac38-001c425c73bc","leaseDurationSeconds":15,"acquireTime":"2019-12-22T09:30:00Z","renewTime":"2019-12-22T11:07:15Z","leaderTransitions":3}'
  creationTimestamp: "2019-12-20T13:26:28Z"
  name: kube-controller-manager
  namespace: kube-system
  resourceVersion: "35332"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager
  uid: 54417b97-232c-11ea-a207-001c425c73bc

#stop the controller-manager on master01
systemctl stop kube-controller-manager

#the controller-manager lease has moved to k8s-etcd-master02 and everything still works
[root@k8s-etcd-master02 kubernetes]# kubectl get endpoints -n kube-system kube-controller-manager -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"k8s-etcd-master02.shared_f16dffb2-24a7-11ea-89b4-001c42662fdd","leaseDurationSeconds":15,"acquireTime":"2019-12-22T11:11:01Z","renewTime":"2019-12-22T11:12:27Z","leaderTransitions":4}'
  creationTimestamp: "2019-12-20T13:26:28Z"
  name: kube-controller-manager
  namespace: kube-system
  resourceVersion: "35880"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager
  uid: 54417b97-232c-11ea-a207-001c425c73bc

#finally, restart master01's controller-manager; the active instance remains k8s-etcd-master02, exactly as expected: test passed
systemctl start kube-controller-manager

12.3. kube-scheduler HA test

Same procedure as above, so it is not demonstrated separately; a quick leader check follows below.
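
Since the mechanism is identical, the scheduler's active instance can be read from the same lease annotation:

#which master currently holds the scheduler lease?
kubectl get endpoints -n kube-system kube-scheduler -o yaml | grep holderIdentity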

13. Changing and Checking Certificate Lifetimes

One last note on certificates: kubernetes certificates default to a 1-year lifetime. The certificates provided with this chapter are already extended to 10 years, so you need not adjust anything; alternatively, you can change the default lifetime by modifying the kubernetes source, as follows.

1) Fetch the source

cd /data && git clone https://github.com/kubernetes/kubernetes.git

2) Check out the target version, v1.12.3 in this example

git checkout -b remotes/origin/release-1.12  v1.12.3

3) Install the Go toolchain

cd /data/soft && wget https://dl.google.com/go/go1.11.2.linux-amd64.tar.gz
tar zxvf go1.11.2.linux-amd64.tar.gz  -C /usr/local 

#append the following to /etc/profile:

#go setting
export GOROOT=/usr/local/go
export GOPATH=/usr/local/gopath
export PATH=$PATH:$GOROOT/bin

source /etc/profile    #apply the change

#verify:

go version
go version go1.11.2 linux/amd64

4) Patch the source:
/data/kubernetes/staging/src/k8s.io/client-go/util/cert/cert.go

112  NotAfter:     time.Now().Add(duration365d * 10).UTC(),
187  NotAfter:  validFrom.Add(maxAge *10),
215  NotAfter:  validFrom.Add(maxAge * 10),

originally 1 year; the * 10 factor makes it 10 years

5) Build:

cd /data/kubernetes/ && make WHAT=cmd/kubeadm

6) Check the certificates' expiry dates

cd /etc/kubernetes/pki

openssl x509 -in front-proxy-client.crt   -noout -text  |grep Not
            Not Before: Nov 28 09:07:02 2018 GMT
            Not After : Nov 25 09:07:03 2028 GMT

openssl x509 -in apiserver.crt   -noout -text  |grep Not
            Not Before: Nov 28 09:07:04 2018 GMT
            Not After : Nov 25 09:07:04 2028 GMT
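
To audit every certificate in one pass instead of file by file, a small sketch:

#print the expiry date of every certificate under /etc/kubernetes/pki
for crt in /etc/kubernetes/pki/*.crt; do
    printf '%-40s ' "$crt"
    openssl x509 -in "$crt" -noout -enddate
done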

14. Closing Words

With that, a highly available, pure-binary kubernetes cluster has been deployed and tested end to end.

And with it, our series "21 Days to Conquer Kubernetes" draws to a close. Thank you for subscribing to my paid column, and do thank yourself as well: the you who stuck it out for 21 days just to learn kubernetes systematically~

Finally, I sincerely hope this series has given you a systematic, comprehensive body of knowledge and real help in your work~

See you in the next series. Bye~
