Deploying a Production Kubernetes Cluster with kubeadm
I. Environment notes:
1. CentOS 7
2. Cluster clocks kept in sync via NTP
3. SELinux in permissive mode (setenforce 0)
4. Firewall (firewalld) disabled
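A minimal sketch of these prep steps, to be run on all three hosts; it assumes chrony as the NTP client (ntpd works equally well):

```bash
# 2. Time sync (assumption: chrony as the NTP client)
yum install -y chrony
systemctl enable chronyd && systemctl start chronyd

# 3. SELinux: permissive now and across reboots
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config

# 4. Firewall off
systemctl stop firewalld && systemctl disable firewalld
```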
II. Hosts:

| k8s-master   | k8s-node1    | k8s-node2    |
| ------------ | ------------ | ------------ |
| 172.16.0.160 | 172.16.0.161 | 172.16.0.162 |

Add the hostname mappings to /etc/hosts on all three hosts:

```
172.16.0.160 k8s-master
172.16.0.161 k8s-node1
172.16.0.162 k8s-node2
```
III. Environment preparation
1. Configure the yum repositories
Configure both repositories below on the master, node1, and node2.
/etc/yum.repos.d/kubernetes.repo:
```ini
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
```
/etc/yum.repos.d/docker-ce.repo:
```ini
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-stable-debuginfo]
name=Docker CE Stable - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-stable-source]
name=Docker CE Stable - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-edge]
name=Docker CE Edge - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/edge
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-edge-debuginfo]
name=Docker CE Edge - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/edge
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-edge-source]
name=Docker CE Edge - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/edge
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test]
name=Docker CE Test - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test-debuginfo]
name=Docker CE Test - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test-source]
name=Docker CE Test - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly]
name=Docker CE Nightly - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly-debuginfo]
name=Docker CE Nightly - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly-source]
name=Docker CE Nightly - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
```
IV. Installation
1. Run the following on the master:

```
[root@k8s-master ~]# yum install docker-ce kubectl kubeadm kubelet -y
[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl start docker
```
2. Make sure both of the following values are 1:

```
[root@k8s-master ~]# cat /proc/sys/net/bridge/bridge-nf-call-ip6tables
1
[root@k8s-master ~]# cat /proc/sys/net/bridge/bridge-nf-call-iptables
1
```
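If either value is 0, load the br_netfilter module and persist the sysctls before continuing; a minimal sketch (the file name k8s.conf is an arbitrary choice):

```bash
# Load the bridge netfilter module now and make the sysctls stick across reboots
modprobe br_netfilter
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
```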
3. Enable both services at boot:

```
[root@k8s-master ~]# systemctl enable docker
[root@k8s-master ~]# systemctl enable kubelet
```
4. Initialize the master node (--pod-network-cidr=10.244.0.0/16 matches flannel's default network, which is installed later; the version must match the v1.18.2 packages installed above):

```
kubeadm init --kubernetes-version=v1.18.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12
```
Because k8s.gcr.io is unreachable from behind the Great Firewall, most users will hit image-pull errors like these:
```
W0422 16:22:43.268820   11829 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.2
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.18.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers), error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.18.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers), error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.18.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers), error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.18.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers), error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers), error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.4.3-0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers), error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.6.7: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers), error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
```
Solution:
Pull the images from the Aliyun mirror first, then retag them under their k8s.gcr.io names (approach adapted from https://blog.csdn.net/networken/article/details/84571373).
[root@k8s-master ~]# vim k8s.sh

```bash
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.18.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.18.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.18.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.18.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0
docker pull coredns/coredns:1.6.7

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.18.2 k8s.gcr.io/kube-proxy:v1.18.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.18.2 k8s.gcr.io/kube-scheduler:v1.18.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.18.2 k8s.gcr.io/kube-apiserver:v1.18.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.18.2 k8s.gcr.io/kube-controller-manager:v1.18.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0
docker tag coredns/coredns:1.6.7 k8s.gcr.io/coredns:1.6.7
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2

docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.18.2
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.18.2
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.18.2
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.18.2
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0
docker rmi coredns/coredns:1.6.7
```

Run it with `bash k8s.sh`, then confirm the images landed:
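As a side note, a mirror-aware kubeadm can skip the pull/tag/rmi dance entirely; a sketch, assuming the Aliyun mirror carries all seven images under google_containers:

```bash
# Alternative: have kubeadm pull straight from the mirror
# (--image-repository exists since kubeadm v1.11; mirror coverage is an assumption)
kubeadm config images pull \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.18.2
```

If you pull this way, pass the same --image-repository flag to kubeadm init so it looks for the mirrored image names instead of k8s.gcr.io.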
```
[root@k8s-master ~]# docker images
REPOSITORY                           TAG       IMAGE ID       CREATED        SIZE
k8s.gcr.io/kube-proxy                v1.18.2   0d40868643c6   5 days ago     117MB
k8s.gcr.io/kube-apiserver            v1.18.2   6ed75ad404bd   5 days ago     173MB
k8s.gcr.io/kube-controller-manager   v1.18.2   ace0a8c17ba9   5 days ago     162MB
k8s.gcr.io/kube-scheduler            v1.18.2   a3099161e137   5 days ago     95.3MB
k8s.gcr.io/pause                     3.2       80d28bedfe5d   2 months ago   683kB
k8s.gcr.io/coredns                   1.6.7     67da37a9a360   2 months ago   43.8MB
k8s.gcr.io/etcd                      3.4.3-0   303ce5db0e90   5 months ago   288MB
```
Re-run the initialization:

```
[root@k8s-master ~]# kubeadm init --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12
```

If no errors are reported, the initialization succeeded.
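Before kubectl can talk to the new cluster, copy the admin kubeconfig into place; these three commands are the ones kubeadm init itself prints on success:

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```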
Save the following block from the init output; it is needed later when the worker nodes join the cluster:

```
Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.0.160:6443 --token xxxxxxxxxxx \
    --discovery-token-ca-cert-hash sha256:xxxxxx
```
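The token embedded in that command expires after 24 hours by default. If it is lost or has expired, generate a fresh join command on the master:

```bash
# Prints a complete 'kubeadm join ...' line with a new token and the CA cert hash
kubeadm token create --print-join-command
```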
```
[root@k8s-master ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
```

This confirms the master components started successfully.
Next, still on the master, install flannel:

```
[root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
```
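Optionally, wait for the flannel DaemonSet to finish rolling out before checking node status; the DaemonSet name below is taken from the apply output above (the amd64 variant matches this environment):

```bash
kubectl -n kube-system rollout status ds/kube-flannel-ds-amd64
```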
```
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   16h   v1.18.2
```

```
[root@k8s-master ~]# kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-66bff467f8-fn8sx             1/1     Running   0          16h
coredns-66bff467f8-k74fn             1/1     Running   0          16h
etcd-k8s-master                      1/1     Running   0          16h
kube-apiserver-k8s-master            1/1     Running   0          16h
kube-controller-manager-k8s-master   1/1     Running   0          16h
kube-flannel-ds-amd64-jg7zw          1/1     Running   0          65s
kube-proxy-n5rfj                     1/1     Running   0          16h
kube-scheduler-k8s-master            1/1     Running   0          16h
```
5. Initialize node1 and node2
First make sure node1 and node2 have the yum repositories from section III and the bridge sysctls from step 2 in place, then install and start the components:

```
yum install docker-ce kubeadm kubelet -y
systemctl enable docker
systemctl start docker
systemctl enable kubelet
```
6. Join node1 and node2 to the cluster

```
[root@k8s-node1 ~]# kubeadm join 172.16.0.160:6443 --token xxxx --discovery-token-ca-cert-hash sha256:xxxx
W0423 10:23:35.439522   22002 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
```
The node joined successfully; run the same command on node2 as well.
Now verify from the master:

```
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES    AGE     VERSION
k8s-master   Ready      master   17h     v1.18.2
k8s-node1    NotReady   <none>   5m44s   v1.18.2
k8s-node2    NotReady   <none>   25s     v1.18.2
```
NotReady means the nodes have not yet fully joined; they are most likely still pulling images. Wait a few minutes and check again. On a node you can watch the images arrive:

```
[root@k8s-node1 ~]# docker images
REPOSITORY               TAG             IMAGE ID       CREATED        SIZE
k8s.gcr.io/kube-proxy    v1.18.2         0d40868643c6   6 days ago     117MB
quay.io/coreos/flannel   v0.12.0-amd64   4e9f801d2217   5 weeks ago    52.8MB
k8s.gcr.io/pause         3.2             80d28bedfe5d   2 months ago   683kB
```
```
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   17h     v1.18.2
k8s-node1    Ready    <none>   11m     v1.18.2
k8s-node2    Ready    <none>   5m56s   v1.18.2
```

All nodes are Ready, so the join completed successfully and the environment is fully installed.
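As a final check, an optional smoke test (the deployment name and image here are arbitrary examples, not part of the original steps):

```bash
# Schedule a throwaway nginx pod and confirm it lands on a worker node
kubectl create deployment nginx --image=nginx
kubectl get pods -o wide

# Clean up afterwards
kubectl delete deployment nginx
```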