k8s Installation Tutorial -- No-Frills Edition

张开发
2026/4/12 9:50:53 · 15-minute read


1 Environment checks (master/node)

```shell
# Check time synchronization
date
rpm -q chronyd
yum install -y chronyd

# Check the firewall
systemctl status firewalld
systemctl stop firewalld
systemctl disable firewalld
iptables -vnL

# Turn off swap
swapoff -a
# To disable it permanently, comment out the swap entry in /etc/fstab:
vi /etc/fstab

# Disable SELinux
# Permanently: set SELINUX=disabled in /etc/sysconfig/selinux
vim /etc/sysconfig/selinux
# Temporarily, so containers can read the host filesystem:
setenforce 0

# Enable IPv4 packet forwarding, the br_netfilter bridge parameters,
# and the user-namespace kernel parameter
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
user.max_user_namespaces = 28633
EOF
sudo sysctl --system
```

2 Installing containerd (master/node)

```shell
# Install
yum install -y yum-utils
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install containerd
containerd config default > /etc/containerd/config.toml

# Edit the config file
vim /etc/containerd/config.toml
# Change
#   sandbox_image = "registry.k8s.io/pause:3.6"
# to
#   sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.10"
systemctl start containerd
```

Configure crictl (`vi /etc/crictl.yaml`):

```shell
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 10
debug: false
```

3 Installing the components (master/node)

```shell
# Add the package repository
cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.31/rpm/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.31/rpm/repodata/repomd.xml.key
EOF

# Refresh the repo metadata
yum clean all
yum makecache

# Install matching versions of kubelet, kubeadm and kubectl
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet

# List the image versions k8s needs
kubeadm config images list --image-repository registry.aliyuncs.com/google_containers

# Pull those images
crictl pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.31.14
crictl pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.31.14
crictl pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.31.14
crictl pull registry.aliyuncs.com/google_containers/kube-proxy:v1.31.14
crictl pull registry.aliyuncs.com/google_containers/coredns:v1.11.3
crictl pull registry.aliyuncs.com/google_containers/pause:3.10
crictl pull registry.aliyuncs.com/google_containers/etcd:3.5.24-0
```

4 Cluster initialization (master)

```shell
kubeadm init \
  --apiserver-advertise-address=192.168.126.130 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.31.14 \
  --service-cidr=11.1.0.0/16 \
  --pod-network-cidr=10.244.0.0/16 \
  --control-plane-endpoint m

# When it finishes, run:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
```

5 Setting up Calico (master)

```shell
# Pull the images
ctr -n k8s.io image pull docker.m.daocloud.io/calico/pod2daemon-flexvol:v3.28.0
ctr -n k8s.io image pull docker.m.daocloud.io/calico/typha:v3.28.0
ctr -n k8s.io image pull docker.m.daocloud.io/calico/kube-controllers:v3.28.0
ctr -n k8s.io image pull docker.m.daocloud.io/calico/apiserver:v3.28.0
ctr -n k8s.io image pull docker.m.daocloud.io/calico/csi:v3.28.0
ctr -n k8s.io image pull docker.m.daocloud.io/calico/cni:v3.28.0
ctr -n k8s.io image pull docker.m.daocloud.io/calico/node:v3.28.0
ctr -n k8s.io image pull docker.m.daocloud.io/calico/node-driver-registrar:v3.28.0

# Re-tag them with the names the manifests expect
ctr -n k8s.io image tag docker.m.daocloud.io/calico/pod2daemon-flexvol:v3.28.0 docker.io/calico/pod2daemon-flexvol:v3.28.0
ctr -n k8s.io image tag docker.m.daocloud.io/calico/typha:v3.28.0 docker.io/calico/typha:v3.28.0
ctr -n k8s.io image tag docker.m.daocloud.io/calico/kube-controllers:v3.28.0 docker.io/calico/kube-controllers:v3.28.0
ctr -n k8s.io image tag docker.m.daocloud.io/calico/apiserver:v3.28.0 docker.io/calico/apiserver:v3.28.0
ctr -n k8s.io image tag docker.m.daocloud.io/calico/csi:v3.28.0 docker.io/calico/csi:v3.28.0
ctr -n k8s.io image tag docker.m.daocloud.io/calico/cni:v3.28.0 docker.io/calico/cni:v3.28.0
ctr -n k8s.io image tag docker.m.daocloud.io/calico/node:v3.28.0 docker.io/calico/node:v3.28.0
ctr -n k8s.io image tag docker.m.daocloud.io/calico/node-driver-registrar:v3.28.0 docker.io/calico/node-driver-registrar:v3.28.0

# Download the manifests
# Tigera Operator:
wget https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/tigera-operator.yaml
# Custom resources:
wget https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/custom-resources.yaml

# Edit custom-resources.yaml and set the pod CIDR to match kubeadm init:
#   cidr: 10.244.0.0/16
vim custom-resources.yaml

# Install
kubectl create -f tigera-operator.yaml
kubectl create -f custom-resources.yaml

# Check the installation
kubectl get tigerastatus
```

Note: nodes in the cluster also need to pull some of these images.

6 Checking cluster state (master)

The node is Ready, the namespaces have been created, and all pods are Running:

```shell
[root@m install-calico]# kubectl get namespace
NAME               STATUS   AGE
calico-apiserver   Active   21m
calico-system      Active   16h
default            Active   17h
kube-node-lease    Active   17h
kube-public        Active   17h
kube-system        Active   17h
tigera-operator    Active   16h
[root@m install-calico]# kubectl get pod -A
NAMESPACE          NAME                                      READY   STATUS    RESTARTS      AGE
calico-apiserver   calico-apiserver-5898cb4fdf-ckp7c         1/1     Running   0             21m
calico-apiserver   calico-apiserver-5898cb4fdf-nd2hg         1/1     Running   0             21m
calico-system      calico-kube-controllers-67cbcd8f6-rfhwn   1/1     Running   0             16h
calico-system      calico-node-pjd44                         1/1     Running   0             16h
calico-system      calico-typha-757d4d64bd-qn4qk             1/1     Running   0             16h
calico-system      csi-node-driver-gtbq4                     2/2     Running   0             16h
kube-system        coredns-855c4dd65d-5dfd8                  1/1     Running   0             17h
kube-system        coredns-855c4dd65d-pjjzn                  1/1     Running   0             17h
kube-system        etcd-m                                    1/1     Running   0             17h
kube-system        kube-apiserver-m                          1/1     Running   0             17h
kube-system        kube-controller-manager-m                 1/1     Running   1 (15h ago)   17h
kube-system        kube-proxy-c4grr                          1/1     Running   0             17h
kube-system        kube-scheduler-m                          1/1     Running   1 (15h ago)   17h
tigera-operator    tigera-operator-6847585ccf-bkz8n          1/1     Running   1 (15h ago)   16h
[root@m install-calico]# kubectl get node
NAME   STATUS   ROLES           AGE   VERSION
m      Ready    control-plane   17h   v1.31.14
```

7 Joining nodes later (node)

```shell
# The default token expires after 24 hours. For a node to join the cluster
# after that, regenerate a token on the master:
kubeadm token create
kubeadm token list

# Join the cluster
kubeadm join --token (fill in your cluster's value) k8-m1:6443 --discovery-token-ca-cert-hash sha256:(fill in your cluster's value)
```
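The `sandbox_image` edit in step 2 can also be done non-interactively with `sed` instead of opening vim, which is handy when provisioning several nodes. A minimal sketch, run here against a throwaway demo file (`/tmp/containerd-config-demo.toml` is hypothetical; on a real host the target is `/etc/containerd/config.toml`):

```shell
# Demo copy of the one line we care about; use /etc/containerd/config.toml on a real host.
CONF=/tmp/containerd-config-demo.toml
printf '    sandbox_image = "registry.k8s.io/pause:3.6"\n' > "$CONF"

# Swap the pause image in place -- the same edit the tutorial makes in vim.
sed -i 's|registry.k8s.io/pause:3.6|registry.aliyuncs.com/google_containers/pause:3.10|' "$CONF"

grep sandbox_image "$CONF"
```

Using `|` as the sed delimiter avoids having to escape the slashes in the image paths.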
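The sixteen Calico pull/tag commands in step 5 differ only in the image name, so they can be generated from one list. Shown as a dry run with `echo` (output also saved to a hypothetical `/tmp/calico-image-cmds.txt` for inspection); remove the `echo`s to execute for real:

```shell
# Generate the eight pull + eight tag commands from a single image list.
MIRROR=docker.m.daocloud.io
VER=v3.28.0
for img in pod2daemon-flexvol typha kube-controllers apiserver csi cni node node-driver-registrar; do
  echo ctr -n k8s.io image pull "$MIRROR/calico/$img:$VER"
  echo ctr -n k8s.io image tag "$MIRROR/calico/$img:$VER" "docker.io/calico/$img:$VER"
done | tee /tmp/calico-image-cmds.txt
```

Bumping `VER` in one place is then enough when moving to a newer Calico release.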
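For the `kubeadm join` command in step 7, the `sha256:` value is the SHA-256 digest of the cluster CA's DER-encoded public key, and it can be recomputed if the original init output is lost. A sketch assuming `openssl` is available, demonstrated on a throwaway self-signed certificate in `/tmp` (hypothetical paths); on the master, point the second command at `/etc/kubernetes/pki/ca.crt` instead:

```shell
# Create a demo CA cert so the pipeline below has something to digest.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt -days 1 2>/dev/null

# Extract the public key, convert it to DER, and hash it -- this hex string
# is what goes after "sha256:" in kubeadm join.
openssl x509 -pubkey -noout -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //' > /tmp/demo-ca.hash
cat /tmp/demo-ca.hash
```

The trailing `sed` strips openssl's `SHA256(stdin)=` prefix so only the 64 hex characters remain.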
