Maintaining and Restarting TiDB Node Hosts with Zero Business Impact

A TiDB cluster is made up of three components: TiDB, PD, and TiKV, and each component runs as a highly available service.

This post summarizes how to carry out server maintenance on a TiDB cluster, especially in production, without the business noticing: disk upgrades, disk expansion, data migration, network upgrades, server restarts, and so on.

Key methods and caveats:

Maintaining a TiDB node:

Lower the weight of the node under maintenance at the load-balancing layer (SLB, HAProxy, etc.); once the node is back, restore its weight or role.
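With HAProxy, for example, the weight can be changed at runtime through the admin socket, without a reload. A minimal sketch, assuming the stats socket is enabled at /var/run/haproxy.sock and a backend tidb_back with server tidb2 (names and paths are hypothetical):

echo "set weight tidb_back/tidb2 0" | socat stdio /var/run/haproxy.sock    # drain: no new connections
# ... maintain the node ...
echo "set weight tidb_back/tidb2 100" | socat stdio /var/run/haproxy.sock  # restore the original weight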

Maintaining a PD node:

Transfer the PD member leader away if necessary, then delete the PD node to be maintained; after the maintenance, clean up its local cache (the member directory) and rejoin it to the cluster.

Maintaining a TiKV node:

First set the leader weight of the TiKV node to 0 and add a scheduler that evicts the leaders on it to other nodes. The TiKV service can then be stopped safely. After maintenance, start it again, restore the weight, and remove the scheduler.

(Try to finish work on a TiKV node within one hour: by default, once a store has been down for more than an hour, PD starts re-creating replicas of its data on the other TiKV nodes, which can drive up host load and even hurt read/write performance for the running business.)
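If the work might take longer, you can temporarily raise max-store-down-time, the PD setting that controls this timeout, and restore it afterwards. A sketch, run inside pd-ctl as in the case study below (the exact default depends on the PD version):

config show                          # check the current max-store-down-time
config set max-store-down-time 2h    # widen the window for the maintenance
config set max-store-down-time 1h    # restore it once the node is back and healthy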

Case study:

Here is a real case: without adding any machines, upgrade the disks on the cluster's existing machines from ordinary disks to SSDs. This involves migrating the data files of every service and restarting them, and the whole process goes unnoticed by the business.

The steps in detail:

1. Attach the new SSD, format it, create an LVM volume, and mount it at /data2.
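A minimal sketch of this step, assuming the new SSD shows up as /dev/sdb (the VG/LV names vg02/lv02 match the rest of this post; the filesystem type is an assumption):

pvcreate /dev/sdb                   # register the SSD as an LVM physical volume
vgcreate vg02 /dev/sdb              # new volume group on the SSD
lvcreate -l 100%FREE -n lv02 vg02   # one logical volume spanning the disk
mkfs.ext4 /dev/vg02/lv02            # format it
mkdir -p /data2
mount /dev/vg02/lv02 /data2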

2. Do a first copy of the original data directory to the new disk (copying everything only after the services are stopped would take too long, so copy first and then do just an incremental copy after stopping; rsync also works, see the sketch after the commands below).

mkdir -p /data2/tidb/deploy

chown -R tidb:tidb /data2

cp -R -a /data/tidb/deploy/data /data2/tidb/deploy/
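The rsync alternative mentioned above might look like this; re-running the same command after the services are stopped transfers only the changed files:

rsync -a /data/tidb/deploy/data/ /data2/tidb/deploy/data/   # first full copy; later runs are incremental
# rsync -a /data/ /data2/ can likewise replace the incremental cp in step 6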

3. Handle the PD node:

cd /data/tidb/tidb-ansible/resources/bin/

./pd-ctl -u http://192.168.11.2:2379

Check the PD information:

member

member leader show

If the PD to be maintained is the leader, transfer leadership first:

member leader transfer pd2

Delete the PD node:

member delete name pd1

4. Handle the TiKV node: lower the weight of the store under maintenance (store 5 here) and schedule its leaders away:

store weight 5 0 1

scheduler add evict-leader-scheduler 5
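Before stopping TiKV, it is worth confirming that the eviction has actually finished. One way, using pd-ctl's single-command (-d) mode:

./pd-ctl -u http://192.168.11.2:2379 -d store 5   # wait until "leader_count" drops to 0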

5. Stop the related services running on this host (those that are present).

Mind the order:

systemctl status pd.service

systemctl stop pd.service

systemctl status tikv-20160.service

systemctl stop tikv-20160.service

Check the services and data:

curl http://192.168.11.2:2379/pd/api/v1/stores

Before stopping tidb, remember to adjust the SLB or HAProxy configuration first so that no traffic reaches it.

systemctl status tidb-4000.service

systemctl stop tidb-4000.service

systemctl status grafana.service

systemctl stop grafana.service

systemctl status prometheus.service

systemctl stop prometheus.service

systemctl status pushgateway.service

systemctl stop pushgateway.service

systemctl status node_exporter.service

systemctl stop node_exporter.service

netstat -lntp

6. Incrementally copy the data:

time \cp -R -a -u -f /data/* /data2/
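Before switching the mounts, a rough sanity check that the two copies are the same size (minor differences from filesystem overhead are normal):

du -sh /data/tidb/deploy/data /data2/tidb/deploy/data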

7. Unmount the original /data and the new /data2:

fuser -cu /data

ps -ef|grep data

umount /dev/vg01/lv01

umount /dev/vg02/lv02

Mount the new disk at /data:

mount /dev/vg02/lv02 /data

df -h

ls -lrt /data

8. Start the services.

systemctl status node_exporter.service

systemctl start node_exporter.service

systemctl status pushgateway.service

systemctl start pushgateway.service

systemctl status prometheus.service

systemctl start prometheus.service

To rejoin PD, its cache has to be cleaned first:

rm -rf /data/tidb/deploy/data.pd/member/

vi /data/tidb/deploy/scripts/run_pd.sh

Remove the --initial-cluster line and add a --join line:

--join="http://192.168.11.2:2379" \
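After the edit, the exec section of run_pd.sh might look roughly like the sketch below; all flags other than --join are illustrative of the tidb-ansible template, with values for the pd1 host (192.168.11.1) being rejoined. Note that --join points at a surviving PD member, not at the node being re-added:

exec bin/pd-server \
    --name="pd1" \
    --client-urls="http://192.168.11.1:2379" \
    --peer-urls="http://192.168.11.1:2380" \
    --data-dir="/data/tidb/deploy/data.pd" \
    --join="http://192.168.11.2:2379" \
    --config=conf/pd.toml \
    --log-file="/data/tidb/deploy/log/pd.log"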

systemctl status pd.service

systemctl start pd.service

systemctl status tikv-20160.service

systemctl start tikv-20160.service

curl http://192.168.11.2:2379/pd/api/v1/stores

systemctl status tidb-4000.service

systemctl start tidb-4000.service

systemctl status grafana.service

systemctl start grafana.service

netstat -lntp

Restore the weight and remove the leader-eviction scheduler:

cd /data/tidb/tidb-ansible/resources/bin/

./pd-ctl -u http://192.168.11.2:2379

store weight 5 1 1

scheduler show

scheduler remove evict-leader-scheduler-5

Watch leader_count and the TiKV logs; after a short while balance-leader will redistribute the leaders automatically.
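One convenient way to watch this from the shell, again using pd-ctl's single-command (-d) mode:

watch -n 5 './pd-ctl -u http://192.168.11.2:2379 -d store 5'   # leader_count should climb back up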

9. Update the mount device entry:

vi /etc/fstab
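Replace the old /data entry with the new logical volume so the mount survives a reboot. A sketch of the new line (filesystem type and options are assumptions; match your own setup):

/dev/vg02/lv02  /data  ext4  defaults  0  0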

10. Restore the PD configuration, mainly to keep the script complete and make future cluster maintenance easier:

vi /data/tidb/deploy/scripts/run_pd.sh

Remove the --join line and put back the original --initial-cluster line:

--initial-cluster="pd1=http://192.168.11.1:2380,pd2=http://192.168.11.2:2380,pd3=http://192.168.11.3:2380" \

Original author: blank_song
Original article: https://www.jianshu.com/p/e479d8d7ebfd