1 Introduction
In 《国内部署Kubernetes集群1.22.1》 (deploying a Kubernetes 1.22.1 cluster in China), we built a Kubernetes cluster by manually pulling every image. That approach has some problems:
- Every image has to be pulled to each machine in advance, which is tedious
- Running containers through Docker is not very efficient and has some compatibility issues
In this article we will build a Kubernetes cluster on containerd and configure custom image registries, so images no longer need to be pulled locally ahead of time.
Machines to prepare:
- 3 machines
- I am using Alibaba Cloud instances running CentOS 7.x
- Assume the hosts are host1 ~ host3
2 Adjust OS / kernel parameters
Run on all 3 machines!!!
Switch to iptables
yum install -y curl iptables iptables-services ntp ipvsadm
yum -y remove firewalld
systemctl enable iptables
systemctl stop iptables
Flush the iptables rules
iptables -F
Disable swap
swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
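A quick optional check that swap is really off (by default the kubelet refuses to start while swap is enabled): the Swap line of free should show 0, and /proc/swaps should list no devices.
free -h
cat /proc/swaps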
Disable SELinux
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
Set the timezone
timedatectl set-timezone Asia/Shanghai
timedatectl set-local-rtc 0
systemctl restart rsyslog
systemctl restart crond
Load kernel modules
modprobe br_netfilter
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack
Load the kernel modules at boot (edit with vim or your editor of choice)
# /etc/modules-load.d/k8s.conf
br_netfilter
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
Kernel parameters
cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
vm.swappiness=0
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
sysctl -p /etc/sysctl.d/kubernetes.conf
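Optionally, a quick sanity check that the modules are loaded and the bridge/forwarding sysctls took effect:
lsmod | grep -E 'br_netfilter|ip_vs|nf_conntrack'
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward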
3 Install containerd
Run on all 3 machines!!!
containerd is another container runtime, originally started by Docker. It is more lightweight, a better fit for Kubernetes, and the officially recommended choice (and, importantly, it lets you configure registry mirrors/proxies).
Install containerd and its dependencies (yum-utils provides yum-config-manager, and the docker-ce repo ships containerd as the containerd.io package)
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y containerd.io
Generate the default configuration
containerd config default > /etc/containerd/config.toml
(Optional) Edit the configuration and point the sandbox image at your own registry
# /etc/containerd/config.toml
sandbox_image = "k8s-gcr.xxx.com/pause:3.2"
(Optional) Configure registry mirrors (it is recommended to run your own; see 《搭建容器仓库的镜像服务器》, the article on setting up a registry mirror server). Replace the xxx domains below with your own mirrors.
# /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".registry]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
      endpoint = ["https://registry.cn-hangzhou.aliyuncs.com"]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
      endpoint = ["https://k8s-gcr.xxx.com"]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"]
      endpoint = ["https://gcr.xxx.com"]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."ghcr.io"]
      endpoint = ["https://ghcr.xxx.com"]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"]
      endpoint = ["https://quay.xxx.com"]
Restart containerd
systemctl daemon-reload
systemctl enable containerd
systemctl restart containerd
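Optionally confirm that the service is active and that the ctr client (installed alongside containerd) can reach the daemon:
systemctl is-active containerd
ctr version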
Install crictl (download the latest release from the cri-tools releases page). It effectively replaces the docker command line for manual use; Kubernetes itself does not need it.
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.22.0/crictl-v1.22.0-linux-amd64.tar.gz
tar zxvf crictl-v1.22.0-linux-amd64.tar.gz -C /usr/local/bin
Configure crictl
cat >/etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
Test it:
crictl ps
CONTAINER   IMAGE   CREATED   STATE   NAME   ATTEMPT   POD ID

crictl pull k8s.gcr.io/kube-apiserver:v1.21.3
Image is up to date for sha256:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80

crictl images
IMAGE                       TAG       IMAGE ID        SIZE
k8s.gcr.io/kube-apiserver   v1.21.3   3d174f00aa39e   30.5MB
4 Install the kubeadm toolset
Run on all 3 machines!
Remove old versions
yum remove -y kubeadm kubectl kubelet kubernetes-cni cri-tools socat
Install
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
sudo yum install -y kubeadm
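Installing kubeadm should pull in kubelet and kubectl as dependencies. As an optional sanity check, verify the version and preview the images the control plane will need (with the mirror configuration above, containerd will fetch them through your mirror when kubeadm init runs):
kubeadm version -o short
kubeadm config images list --kubernetes-version v1.23.0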
Modify the kubelet configuration
# /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
# Add this line:
Environment="KUBELET_EXTRA_ARGS=--container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
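Alternatively (assuming the RPM's drop-in still contains its usual EnvironmentFile=-/etc/sysconfig/kubelet line), the same flags can be set in /etc/sysconfig/kubelet instead of editing the unit drop-in directly; use one of the two approaches, not both.
# /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=--container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock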
Enable the kubelet service
systemctl enable kubelet
5 Initialize the control-plane node
Run on host1 only. Assume its IP is 172.20.3.84.
kubeadm init --kubernetes-version v1.23.0 --apiserver-advertise-address=172.20.3.84 --pod-network-cidr=10.6.0.0/16
After it succeeds, keep the output; you will need the join command later:
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.20.3.84:6443 --token 78d4sx.xb84qtp73g8ftfoe \
        --discovery-token-ca-cert-hash sha256:d3b13336cbec9a702ad0f78bd7e310fc3d28b01f2cb786bc2746547656a8776c
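After running one of the kubeconfig snippets above, a quick check that the control-plane pods came up (the coredns pods normally stay Pending until the network plugin from section 7 is installed):
kubectl get pods -n kube-system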
6 Join the other nodes
Run on host2 and host3:
kubeadm join 172.20.3.84:6443 --token 78d4sx.xb84qtp73g8ftfoe --discovery-token-ca-cert-hash sha256:d3b13336cbec9a702ad0f78bd7e310fc3d28b01f2cb786bc2746547656a8776c
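If the join command was lost or the token has expired (tokens are valid for 24 hours by default), a new one can be generated on host1:
kubeadm token create --print-join-command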
7 Add the network plugin
Run on host1.
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# edit the CIDR in the manifest to match --pod-network-cidr, then apply
kubectl apply -f ./kube-flannel.yml
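For reference, the CIDR lives in the net-conf.json entry of the kube-flannel-cfg ConfigMap inside kube-flannel.yml; the stock manifest ships 10.244.0.0/16, which should be changed to the 10.6.0.0/16 used in section 5 (the surrounding YAML may differ slightly between flannel versions):
  net-conf.json: |
    {
      "Network": "10.6.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }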
8 Test
kubectl get nodes
NAME      STATUS   ROLES                  AGE   VERSION
host001   Ready    control-plane,master   94s   v1.23.0
host002   Ready    <none>                 55s   v1.23.0
host003   Ready    <none>                 52s   v1.23.0
Create a pod
kubectl run nginx --image=nginx
Check its IP
kubectl describe pod nginx | grep IP
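The pod IP (plus the node it landed on) is also visible with the wide output format:
kubectl get pod nginx -o wide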
Access it. Success!
curl "10.6.2.2" <!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/> Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html>