1. Environment Overview
Hostname | IP | Components |
---|---|---|
master-123 (doubles as a node) | 192.168.116.123 | etcd flannel kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy |
node-124 | 192.168.116.124 | flannel kubelet kube-proxy |
Since only two machines are available for now, and an etcd cluster needs an odd number of members to elect a leader reliably, etcd is installed on a single machine for the time being.
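The odd-member rule follows from Raft's majority requirement; the arithmetic can be sketched in plain shell, no cluster needed:

```shell
# Raft quorum: floor(n/2) + 1 members must be up for the cluster to serve.
# A single member has quorum 1 (it runs, but with no fault tolerance), and
# an even-sized cluster tolerates no more failures than the odd size below it.
quorum() { echo $(( $1 / 2 + 1 )); }
for n in 1 2 3 5; do
  echo "$n member(s): quorum $(quorum "$n"), tolerates $(( n - $(quorum "$n") )) failure(s)"
done
```

This is why 2 machines gain nothing over 1 for etcd: both tolerate zero failures.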
2. Initialize the Environment (Set Hostnames)
On 192.168.116.123 run: hostnamectl --static set-hostname master-123
On 192.168.116.124 run: hostnamectl --static set-hostname node-124
Edit /etc/hosts on both machines so the nodes can reach each other by hostname:
vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.116.123 master-123
192.168.116.124 node-124
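The two entries can also be appended idempotently. A sketch that works on a stand-in file so it can be tried safely; point HOSTS_FILE at /etc/hosts and run as root on the real nodes:

```shell
# Append each cluster entry only if it is not already present.
HOSTS_FILE=/tmp/hosts.demo        # use /etc/hosts on the real machines
: > "$HOSTS_FILE"                 # stand-in file for this sketch
for entry in "192.168.116.123 master-123" "192.168.116.124 node-124"; do
  grep -qxF "$entry" "$HOSTS_FILE" || echo "$entry" >> "$HOSTS_FILE"
done
cat "$HOSTS_FILE"
```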
3. Create and Distribute CA Certificates with cfssl
3.1 Install cfssl
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
chmod +x cfssl_linux-amd64
sudo mv cfssl_linux-amd64 /usr/local/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssljson_linux-amd64
sudo mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl-certinfo_linux-amd64
sudo mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
3.2 CA Certificate Configuration
Create config.json:
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
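The "87600h" expiry amounts to ten years; a one-line check of the arithmetic:

```shell
hours=87600
echo "$(( hours / 24 / 365 )) years"   # 87600h / 24 / 365 = 10 years
```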
Create csr.json:
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "ShangHai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
3.3 Generate the CA Certificate and Private Key
mkdir -p /opt/ssl
cd /opt/ssl
Run: cfssl gencert -initca csr.json | cfssljson -bare ca
[root@localhost ssl]# ls -ltr
total 20
-rw-r--r--. 1 root root 387 Jul 27 15:01 config.json
-rw-r--r--. 1 root root 267 Jul 27 15:04 csr.json
-rw-r--r--. 1 root root 1363 Jul 27 15:07 ca.pem
-rw-------. 1 root root 1675 Jul 27 15:07 ca-key.pem
-rw-r--r--. 1 root root 1005 Jul 27 15:07 ca.csr
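It is worth confirming what was generated before distributing it. A sketch using openssl (cfssl-certinfo -cert ca.pem shows the same data); a throwaway self-signed certificate stands in here so the sketch is self-contained. On the master, skip the generation step and inspect /opt/ssl/ca.pem instead:

```shell
cd "$(mktemp -d)"
# Stand-in CA certificate for this sketch only.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca-key.pem -out ca.pem \
  -days 3650 -subj "/CN=kubernetes/O=k8s" 2>/dev/null
# For a self-signed root, subject and issuer match; the dates show validity.
openssl x509 -in ca.pem -noout -subject -issuer -dates
```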
3.4 Distribute the Certificates
Create the certificate directory:
mkdir -p /etc/kubernetes/ssl
Copy all the files into it:
cp * /etc/kubernetes/ssl/
Copy the files to every other k8s machine (create /etc/kubernetes/ssl on the target first):
scp * root@192.168.116.124:/etc/kubernetes/ssl/
4. Install etcd and Configure CA Authentication
etcd is a highly available key-value store designed from the ground up for clustering. Because the Raft algorithm needs a majority of members to vote on every decision, etcd clusters should have an odd number of members; 3, 5, or 7 nodes are the recommended sizes.
Upload the package: etcd-3.1.7-1.el7.x86_64.rpm
Install it: rpm -ivh etcd-3.1.7-1.el7.x86_64.rpm
Download: http://www.rpmfind.net/linux/…
4.1 Install the etcd Certificates
For now etcd is created only on the single master; two more etcd nodes will be added later.
cd /opt/ssl
vi etcd-csr.json
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.116.123",
    "192.168.116.124",
    "192.168.116.120",
    "192.168.116.125"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "ShangHai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
The hosts list above should include the IP of every current and planned etcd node; otherwise the certificate has to be regenerated and redistributed whenever a node is added.
Generate the etcd key and certificate:
cfssl gencert -ca=/opt/ssl/ca.pem \
-ca-key=/opt/ssl/ca-key.pem \
-config=/opt/ssl/config.json \
-profile=kubernetes etcd-csr.json | cfssljson -bare etcd
Check the output:
[root@localhost ssl]# ls -ltr etcd*
-rw-r--r--. 1 root root 295 Jul 27 15:22 etcd-csr.json
-rw-r--r--. 1 root root 1440 Jul 27 15:24 etcd.pem
-rw-------. 1 root root 1679 Jul 27 15:24 etcd-key.pem
-rw-r--r--. 1 root root 1066 Jul 27 15:24 etcd.csr
Copy the files to the etcd servers (adjust the IP for each additional etcd node):
cp etcd* /etc/kubernetes/ssl/
scp etcd* root@192.168.116.124:/etc/kubernetes/ssl
If etcd runs as a non-root user, it cannot read the private key and fails with a permission error. On every etcd node run:
chmod 644 /etc/kubernetes/ssl/etcd-key.pem
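Mode 644 leaves the private key world-readable; handing the file to the etcd user with mode 600 is tighter. A sketch on a stand-in file; on the nodes, run it as root against /etc/kubernetes/ssl/etcd-key.pem:

```shell
KEY=/tmp/etcd-key.demo.pem   # /etc/kubernetes/ssl/etcd-key.pem on the nodes
touch "$KEY"
chmod 600 "$KEY"             # owner-only read/write
# chown etcd:etcd "$KEY"     # needs root and the etcd user; uncomment on the nodes
stat -c '%a' "$KEY"          # prints 600
```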
4.2 配置etcd服务
vi /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
User=etcd
# set GOMAXPROCS to number of processors
ExecStart=/usr/bin/etcd \
  --name=etcd1 \
  --cert-file=/etc/kubernetes/ssl/etcd.pem \
  --key-file=/etc/kubernetes/ssl/etcd-key.pem \
  --peer-cert-file=/etc/kubernetes/ssl/etcd.pem \
  --peer-key-file=/etc/kubernetes/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --initial-advertise-peer-urls=https://192.168.116.123:2380 \
  --listen-peer-urls=https://192.168.116.123:2380 \
  --listen-client-urls=https://192.168.116.123:2379,http://127.0.0.1:2379 \
  --advertise-client-urls=https://192.168.116.123:2379 \
  --initial-cluster-token=k8s-etcd-cluster \
  --initial-cluster=etcd1=https://192.168.116.123:2380 \
  --initial-cluster-state=new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
For a multi-node etcd cluster, change --name and the listen/advertise IPs on each node, and --initial-cluster must list every member, not just the local one.
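For example, a hypothetical second member etcd2 on node-124 (the name is an assumption following the etcd1 pattern above) would differ from the unit above only in these flags. If the member joins an already-running cluster rather than bootstrapping with it, first register it with etcdctl member add and use --initial-cluster-state=existing:

```
--name=etcd2 \
--initial-advertise-peer-urls=https://192.168.116.124:2380 \
--listen-peer-urls=https://192.168.116.124:2380 \
--listen-client-urls=https://192.168.116.124:2379,http://127.0.0.1:2379 \
--advertise-client-urls=https://192.168.116.124:2379 \
--initial-cluster=etcd1=https://192.168.116.123:2380,etcd2=https://192.168.116.124:2380 \
```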
4.3 Disable the Firewall on All Hosts
On every node:
Disable the firewall at boot: systemctl disable firewalld
Stop the firewall now: systemctl stop firewalld
Start etcd:
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd
4.4 Verify the etcd Cluster Status
etcdctl --endpoints=https://192.168.116.123:2379 \
--cert-file=/etc/kubernetes/ssl/etcd.pem \
--ca-file=/etc/kubernetes/ssl/ca.pem \
--key-file=/etc/kubernetes/ssl/etcd-key.pem \
cluster-health
etcdctl --endpoints=https://192.168.116.123:2379 \
--cert-file=/etc/kubernetes/ssl/etcd.pem \
--ca-file=/etc/kubernetes/ssl/ca.pem \
--key-file=/etc/kubernetes/ssl/etcd-key.pem \
member list
Part 2 will be published tonight:
[Installing kubernetes-1.7.3 from Scratch] 2. Configuring flannel, Docker, and Harbor, and what each one does