Quickly Building a Kubernetes Cluster (3 Masters, 3 Workers) on openEuler with kubeadm [v1.20]

kubeadm is a tool released by the official community for quickly deploying Kubernetes clusters.

YAML files used in this article:

calico.yaml
kubernetes-dashboard.yaml

1. Installation requirements

Before starting, the machines for the Kubernetes cluster must meet the following conditions:

  • 6 machines running openEuler 22.03 LTS SP4
  • Hardware: 2 GB RAM or more, 2 CPUs or more, 30 GB of disk or more
  • Full network connectivity between all machines in the cluster
  • Internet access for pulling images [arrange your own registry proxy if needed]
  • Swap disabled

2. Prepare the environment

Role           IP
k8s-master01   172.22.33.215
k8s-master02   172.22.33.216
k8s-master03   172.22.33.217
k8s-node01     172.22.33.218
k8s-node02     172.22.33.219
k8s-node03     172.22.33.220

Disable the firewall:
$ systemctl stop firewalld
$ systemctl disable firewalld

Disable SELinux:
$ sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
$ setenforce 0  # takes effect immediately

Disable swap [temporarily and permanently]:
# Temporary
$ swapoff -a

# Permanent
$ sed -ri 's/.*swap.*/#&/' /etc/fstab
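The `sed` above comments out every fstab line that mentions swap. A self-contained sketch of its effect on a throwaway sample fstab (the temp file and its contents are illustrative only):

```shell
# Demonstrate the swap-commenting sed on a copy, not the real /etc/fstab
sample=$(mktemp)
cat > "$sample" <<'EOF'
/dev/mapper/root /     ext4 defaults 0 0
/dev/mapper/swap swap  swap defaults 0 0
EOF
sed -ri 's/.*swap.*/#&/' "$sample"
cat "$sample"   # the swap line is now commented out; the root line is untouched
```

After the real edit plus `swapoff -a` (or a reboot), `free -h` should report 0 swap.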


Set the hostnames (run the matching command on each node):
$ hostnamectl set-hostname k8s-master01
$ hostnamectl set-hostname k8s-master02
$ hostnamectl set-hostname k8s-master03
$ hostnamectl set-hostname k8s-node01
$ hostnamectl set-hostname k8s-node02
$ hostnamectl set-hostname k8s-node03


#Add the hosts entries on all nodes:
$ cat >> /etc/hosts << EOF
172.22.33.215 k8s-master01
172.22.33.216 k8s-master02
172.22.33.217 k8s-master03
172.22.33.218 k8s-node01
172.22.33.219 k8s-node02
172.22.33.220 k8s-node03
EOF
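A quick sanity check (a sketch, not part of the original steps) that every cluster hostname appears in the local /etc/hosts:

```shell
# Report whether each cluster hostname is present in /etc/hosts
for h in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02 k8s-node03; do
  if grep -qw "$h" /etc/hosts; then echo "$h present"; else echo "$h MISSING"; fi
done
```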


#Enable kernel IP forwarding (this sed assumes /etc/sysctl.conf already contains net.ipv4.ip_forward=0)
sed -i 's/net.ipv4.ip_forward=0/net.ipv4.ip_forward=1/g' /etc/sysctl.conf


#Pass bridged IPv4/IPv6 traffic to the iptables chains:
$ cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness = 0
EOF

# Apply
$ sysctl --system


#Install the IPVS userspace tools and load the IPVS kernel modules on every node
$ yum install -y ipvsadm

#Run the following on all nodes
#Note: on kernels >= 4.19 (openEuler 22.03 ships 5.10), nf_conntrack_ipv4 has been
#merged into nf_conntrack, so load nf_conntrack instead
$ cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF

$ chmod 755 /etc/sysconfig/modules/ipvs.modules
$ bash /etc/sysconfig/modules/ipvs.modules

#Check that the IPVS modules are loaded
$ lsmod | grep -e ip_vs -e nf_conntrack
#You should see ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh, and nf_conntrack loaded
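The check above can be made mechanical. A small helper (an illustrative sketch; the function name is my own) that flags missing modules given `lsmod` output:

```shell
# Print any module from the required list that is absent; return non-zero if any is missing
check_ipvs_modules() {
  loaded="$1"; missing=0
  for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack; do
    echo "$loaded" | grep -qw "$m" || { echo "missing: $m"; missing=1; }
  done
  return $missing
}
# On a node: check_ipvs_modules "$(lsmod)" && echo "all IPVS modules loaded"
```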


Time synchronization:
$ yum install ntpdate -y
$ ntpdate ntp.ntsc.ac.cn
# Alternatively, chrony can be used: systemctl enable --now chronyd

3. Install Docker/kubeadm/kubelet [all nodes]

Kubernetes v1.20 still uses Docker as the default CRI (container runtime), so install Docker first.

3.1.1 Download the binary package

docker-ce community download URL:

$ wget https://mirrors.nju.edu.cn/docker-ce/linux/static/stable/x86_64/docker-20.10.24.tgz

or from the internal mirror:

$ wget https://download.tsingyun.link/linux-soft/docker/docker-20.10.24.tgz

3.1.2 Extract and copy the binaries to /usr/bin

$ tar -xf docker-20.10.24.tgz
$ cp docker/* /usr/bin
$ which docker

3.1.3 Create the docker.service unit file

# Quote the heredoc delimiter so the shell does not expand $MAINPID
$ cat > /etc/systemd/system/docker.service <<'EOF'

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=65535
LimitNPROC=65535
LimitCORE=65535
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

EOF

3.1.4 Mount a dedicated disk for Docker's data directory [optional]

Docker's default data directory is /var/lib/docker. Rather than changing this path in the daemon config, keep the default and point it at another disk via a symlink.

#Create the working directory
$ mkdir /home/application/

#Format the disk
$ mkfs.ext4 /dev/sdb

#Mount the disk permanently
$ vim /etc/fstab
/dev/sdb  /home/application  ext4 defaults 0 0

#Apply the mount
$ mount -a


#Create the Docker data directory
$ mkdir /home/application/docker


#Create the symlink
$ ln -s /home/application/docker /var/lib/
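A self-contained sketch of the symlink trick using temporary paths (the real paths are /home/application/docker and /var/lib/docker; run the real commands before Docker is first started, so /var/lib/docker does not yet exist):

```shell
# Demonstrate redirecting a data directory through a symlink
demo=$(mktemp -d)
mkdir -p "$demo/application/docker" "$demo/var/lib"
ln -s "$demo/application/docker" "$demo/var/lib/docker"
readlink "$demo/var/lib/docker"   # prints the real storage path
```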

3.1.5 Set permissions on the unit file

Unit files only need to be readable; systemd warns if they are marked executable:

$ chmod 644 /etc/systemd/system/docker.service

3.1.6 Reload systemd, start Docker, and enable it at boot

$ systemctl daemon-reload
$ systemctl start docker.service
$ systemctl enable docker.service

3.1.7 Configure a registry mirror

$ mkdir -p /etc/docker
$ tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://docker.m.daocloud.io"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

$ systemctl daemon-reload
$ systemctl restart docker

:warning:Tip:
If a registry does not serve a trusted HTTPS certificate, add it to the insecure-registries list in /etc/docker/daemon.json (entries are host[:port], without the scheme), for example:

"insecure-registries": [
    "harbor.hse.com",
    "it-docker.pkg.devops.sinochem.com"
  ]

3.2.1 Add the Alibaba Cloud YUM repository

$ cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

3.2.2 Install kubeadm, kubelet, and kubectl

Kubernetes releases frequently, so pin the version when installing:

# List all available versions
$ yum list kubeadm kubelet kubectl --showduplicates | sort -r


# Install on all nodes; the version must match kubernetesVersion (v1.20.15) in kubeadm-init.yaml below
$ yum install -y kubelet-1.20.15 kubeadm-1.20.15 kubectl-1.20.15
$ systemctl enable kubelet

4. Deploy the Kubernetes control plane

https://kubernetes.io/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#initializing-your-control-plane-node

4.1 Prepare the kubeadm-init.yaml initialization file

---
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef   # tokens must match the format [a-z0-9]{6}.[a-z0-9]{16}
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.22.33.215     # master01's IP address
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master01                  # master01's hostname
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
    - "kubernetes"
    - "kubernetes.default"
    - "kubernetes.default.svc"
    - "kubernetes.default.svc.cluster.local"
    - "172.22.33.215"         #填写所有master节点IP,VIP, 以及公网IP,域名
    - "172.22.33.216"
    - "172.22.33.217"
    - "112.94.71.21"
    - "k8s.srebro.cn"
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
controlPlaneEndpoint: "172.22.33.215:36443"   # the cluster kube-apiserver endpoint (VIP:port)
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.20.15
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"

4.2 Pull the images on every master node

Copy kubeadm-init.yaml to every master node.

#Create a working directory for the init files on each master node
$ mkdir -p /etc/kubernetes/init

$ ls -l /etc/kubernetes/init/kubeadm-init.yaml 
-rw-r--r-- 1 root root 1431 Aug  9 16:19 /etc/kubernetes/init/kubeadm-init.yaml

Pull the images before running init:

#On each master node (k8s-master01, k8s-master02, k8s-master03)
$ kubeadm config images pull --config kubeadm-init.yaml

[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.20.15
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.20.15
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.20.15
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.20.15
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.2
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.4.13-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:1.7.0


4.3 Run the kubeadm initialization on master01

$ cd /etc/kubernetes/init
$ kubeadm init --config kubeadm-init.yaml


[init] Using Kubernetes version: v1.20.15
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.18. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.22.33.215 172.22.33.216 172.22.33.217 218.94.71.250]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master01 localhost] and IPs [172.22.33.215 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost] and IPs [172.22.33.215 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 13.003320 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
  export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
  kubeadm join 172.22.33.215:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:1105970dfcca841937c09964e46f3fa14a53672105462e50108d95279ed5dc7c \
    --control-plane 

    
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.22.33.215:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:1105970dfcca841937c09964e46f3fa14a53672105462e50108d95279ed5dc7c 

Copy the kubeconfig used by kubectl to its default path:

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

4.4 Join the other Kubernetes master nodes

Note: since the cluster has already been initialized, the first step before adding more master nodes is to copy the certificates generated during init to each of them.

4.4.1 Copy the certificates to master02 and master03

#Create the certificate directories
$ mkdir -p /etc/kubernetes/pki/etcd

$ scp -r 172.22.33.215:/etc/kubernetes/pki/ca.* /etc/kubernetes/pki/
$ scp -r 172.22.33.215:/etc/kubernetes/pki/sa.* /etc/kubernetes/pki/

$ scp -r 172.22.33.215:/etc/kubernetes/pki/front-proxy-ca.* /etc/kubernetes/pki/
$ scp -r 172.22.33.215:/etc/kubernetes/pki/etcd/ca.* /etc/kubernetes/pki/etcd/

$ scp -r 172.22.33.215:/etc/kubernetes/admin.conf /etc/kubernetes/

4.4.2 Add master02 and master03 to the Kubernetes cluster

Use the kubeadm join command printed by kubeadm init on master01:

#Join as a control-plane node
$ kubeadm join 172.22.33.215:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:1105970dfcca841937c09964e46f3fa14a53672105462e50108d95279ed5dc7c \
    --control-plane 

4.4.3 Check the master nodes from master01

$ kubectl get node
NAME           STATUS     ROLES                  AGE     VERSION
k8s-master01   NotReady   control-plane,master   8m26s   v1.20.15
k8s-master02   NotReady   control-plane,master   2m54s   v1.20.15
k8s-master03   NotReady   control-plane,master   59s     v1.20.15

All master nodes have now joined the cluster. They report NotReady until a CNI network plugin is installed.

5. Join the Kubernetes worker nodes

Run the following on each worker node:

k8s-node01 172.22.33.218
k8s-node02 172.22.33.219
k8s-node03 172.22.33.220

Use the kubeadm join command printed by kubeadm init on master01:

$ kubeadm join 172.22.33.215:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:1105970dfcca841937c09964e46f3fa14a53672105462e50108d95279ed5dc7c 

The default token is valid for 24 hours; once it expires it can no longer be used and a new one must be created. Generate a fresh join command directly on a master node:

$ kubeadm token create --print-join-command

https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/
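If only the token is at hand, the --discovery-token-ca-cert-hash value can be recomputed from the cluster CA certificate. This is the standard openssl pipeline from the kubeadm documentation (assumes an RSA CA key, which kubeadm generates by default; the helper name is my own):

```shell
# Print the sha256 discovery hash of a CA certificate's public key
ca_cert_hash() {
  openssl x509 -pubkey -in "${1:-/etc/kubernetes/pki/ca.crt}" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}
# On a master: ca_cert_hash   -> use as --discovery-token-ca-cert-hash sha256:<output>
```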

6. 部署容器网络(CNI)

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network

:::danger
Note: deploy only ONE of the network plugins; Calico is recommended.
Calico is a pure layer-3 data center networking solution that supports a wide range of platforms, including Kubernetes and OpenStack.
On every compute node, Calico uses the Linux kernel to implement an efficient virtual router (vRouter) responsible for packet forwarding, and each vRouter propagates the routes of the workloads running on it across the whole Calico network via BGP.

In addition, Calico implements Kubernetes network policy, providing ACL functionality.
https://docs.projectcalico.org/getting-started/kubernetes/quickstart
For the mapping between Kubernetes and Calico versions, see:
https://blog.csdn.net/qq_32596527/article/details/127692734
:::

$ wget https://projectcalico.docs.tigera.io/archive/v3.21/manifests/calico.yaml

After downloading, edit the Pod network CIDR (CALICO_IPV4POOL_CIDR) in the manifest to match the
podSubnet set in kubeadm-init.yaml (10.244.0.0/16); the default is 192.168.0.0/16.

- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"

After editing, apply the manifest:

$ kubectl apply -f calico.yaml
$ kubectl get pods -n kube-system

Check all nodes, plus pod and service status:

$ kubectl get nodes
NAME           STATUS   ROLES                  AGE   VERSION
k8s-master01   Ready    control-plane,master   70m   v1.20.15
k8s-master02   Ready    control-plane,master   64m   v1.20.15
k8s-master03   Ready    control-plane,master   62m   v1.20.15
k8s-node01     Ready    <none>                 46m   v1.20.15
k8s-node02     Ready    <none>                 46m   v1.20.15
k8s-node03     Ready    <none>                 46m   v1.20.15
[root@openeuler ~]# kubectl get all -A
NAMESPACE     NAME                                           READY   STATUS    RESTARTS   AGE
default       pod/my-web-6b6c9df775-2p67z                    1/1     Running   0          45m
default       pod/my-web-6b6c9df775-76f8w                    1/1     Running   0          45m
default       pod/my-web-6b6c9df775-t2zrm                    1/1     Running   0          45m
default       pod/my-web-6b6c9df775-zs6db                    1/1     Running   0          45m
kube-system   pod/calico-kube-controllers-5bb48c55fd-vvktf   1/1     Running   0          52m
kube-system   pod/calico-node-7ng4l                          1/1     Running   0          52m
kube-system   pod/calico-node-lrf24                          1/1     Running   0          47m
kube-system   pod/calico-node-qxnxg                          1/1     Running   0          52m
kube-system   pod/calico-node-vwjt4                          1/1     Running   0          52m
kube-system   pod/coredns-7f89b7bc75-cc6gn                   1/1     Running   0          70m
kube-system   pod/coredns-7f89b7bc75-wsg29                   1/1     Running   0          70m
kube-system   pod/etcd-k8s-master01                          1/1     Running   0          70m
kube-system   pod/etcd-k8s-master02                          1/1     Running   0          65m
kube-system   pod/etcd-k8s-master03                          1/1     Running   0          62m
kube-system   pod/kube-apiserver-k8s-master01                1/1     Running   0          70m
kube-system   pod/kube-apiserver-k8s-master02                1/1     Running   0          65m
kube-system   pod/kube-apiserver-k8s-master03                1/1     Running   0          62m
kube-system   pod/kube-controller-manager-k8s-master01       1/1     Running   1          70m
kube-system   pod/kube-controller-manager-k8s-master02       1/1     Running   0          65m
kube-system   pod/kube-controller-manager-k8s-master03       1/1     Running   0          62m
kube-system   pod/kube-proxy-ck9lr                           1/1     Running   0          63m
kube-system   pod/kube-proxy-vzrs8                           1/1     Running   0          65m
kube-system   pod/kube-proxy-wxb78                           1/1     Running   0          70m
kube-system   pod/kube-proxy-z5chd                           1/1     Running   0          47m
kube-system   pod/kube-scheduler-k8s-master01                1/1     Running   1          70m
kube-system   pod/kube-scheduler-k8s-master02                1/1     Running   0          65m
kube-system   pod/kube-scheduler-k8s-master03                1/1     Running   0          62m

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  70m
kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   70m

NAMESPACE     NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/calico-node   4         4         4       4            4           kubernetes.io/os=linux   52m
kube-system   daemonset.apps/kube-proxy    4         4         4       4            4           kubernetes.io/os=linux   70m

NAMESPACE     NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
default       deployment.apps/my-web                    4/4     4            4           45m
kube-system   deployment.apps/calico-kube-controllers   1/1     1            1           52m
kube-system   deployment.apps/coredns                   2/2     2            2           70m

NAMESPACE     NAME                                                 DESIRED   CURRENT   READY   AGE
default       replicaset.apps/my-web-6b6c9df775                    4         4         4       45m
kube-system   replicaset.apps/calico-kube-controllers-5bb48c55fd   1         1         1       52m
kube-system   replicaset.apps/coredns-7f89b7bc75                   2         2         2       70m

7. Test the Kubernetes cluster

  • Verify that Pods run
  • Verify Pod-to-Pod network connectivity
  • Verify DNS resolution

Create a pod in the Kubernetes cluster and verify that it runs normally:

$ kubectl create deployment nginx --image=nginx
$ kubectl expose deployment nginx --port=80 --type=NodePort
$ kubectl get pod,svc

Access it at: http://<NodeIP>:<NodePort> (the assigned NodePort is shown by kubectl get svc)

8. Deploy the Dashboard [optional]

$ wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml

By default the Dashboard is only reachable from inside the cluster. Change its Service to type NodePort to expose it externally:

$ vi recommended.yaml
...
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort
...

$ kubectl apply -f recommended.yaml
$ kubectl get pods -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-6b4884c9d5-gl8nr   1/1     Running   0          13m
kubernetes-dashboard-7f99b75bf4-89cds        1/1     Running   0          13m

Access it at: https://<NodeIP>:30001

Create a service account and bind it to the built-in cluster-admin role:

# Create the user
$ kubectl create serviceaccount dashboard-admin -n kube-system
# Grant the user permissions
$ kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
# Retrieve the user's token
$ kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

Log in to the Dashboard with the token from the output.

9. Appendix

9.1 Switch the container runtime to containerd

https://kubernetes.io/zh/docs/setup/production-environment/container-runtimes/#containerd

1. Prerequisites

$ cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

$ sudo modprobe overlay
$ sudo modprobe br_netfilter

# Set the required sysctl params; these persist across reboots.
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# Apply sysctl params without reboot
$ sudo sysctl --system

2. Install containerd

$ yum install -y yum-utils device-mapper-persistent-data lvm2
$ yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
$ yum update -y && sudo yum install -y containerd.io
$ mkdir -p /etc/containerd
$ containerd config default | sudo tee /etc/containerd/config.toml
$ systemctl restart containerd

3. Edit the configuration file

$ vim /etc/containerd/config.toml
   [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.2"  
         ...
         [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
             SystemdCgroup = true
             ...
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
          endpoint = ["https://b9pmyelo.mirror.aliyuncs.com"]
          
$ systemctl restart containerd

4. Configure kubelet to use containerd

$ vi /etc/sysconfig/kubelet 
KUBELET_EXTRA_ARGS=--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --cgroup-driver=systemd

$ systemctl restart kubelet

5. Verify

$ kubectl get node -o wide

k8s-node1  xxx  containerd://1.4.4

10. Other notes

Enable kubectl command completion:

$ yum install bash-completion -y
$ source /usr/share/bash-completion/bash_completion
$ source <(kubectl completion bash)
$ kubectl completion bash >/etc/bash_completion.d/kubectl