CentOS: Checking Disk Read/Write Performance

After starting Tomcat, deploying the WAR file was noticeably slower than before, so the disk was suspected to be the problem.

Write test

[tomcat@localhost ~]$ dd if=/dev/zero of=kwxgd bs=64k count=4k oflag=dsync
4096+0 records in
4096+0 records out
268435456 bytes (268 MB) copied, 127.514 s, 2.1 MB/s
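For a rough comparison, the same write test can be run with direct I/O instead of synchronous writes (same file name and sizes as above, only the flag differs); oflag=direct bypasses the page cache without forcing a physical flush of every block, so on a healthy disk it is normally faster than oflag=dsync:

dd if=/dev/zero of=kwxgd bs=64k count=4k oflag=direct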

Read test

dd if=kwxgd of=/dev/null bs=64k count=4k iflag=direct
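If the read test is repeated without iflag=direct, the file is likely to be served from the page cache and the result says little about the disk. A minimal sketch of a more honest re-run (as root, assuming the kwxgd file from the write test above still exists):

sync && echo 3 > /proc/sys/vm/drop_caches
dd if=kwxgd of=/dev/null bs=64k count=4k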

I/O status

[tomcat@localhost ~]$ iostat -x 1
Linux 2.6.32-358.el6.x86_64 (localhost.localdomain)  2015年07月02日  _x86_64_  (24 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.77    0.00    0.12    0.00    0.00   99.11

Device:  rrqm/s  wrqm/s   r/s   w/s  rsec/s  wsec/s avgrq-sz avgqu-sz  await  svctm  %util
sda        0.01   55.48  0.03  2.42    3.20  463.18   190.22     0.04  14.74   0.69   0.17

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    0.04    0.04    0.00   99.92

Device:  rrqm/s  wrqm/s   r/s   w/s  rsec/s  wsec/s avgrq-sz avgqu-sz  await  svctm  %util
sda        0.00    0.00  1.00  0.00   16.00    0.00    16.00     0.01  11.00  11.00   1.10

For comparison, the output of plain iostat (without -x) on the same host:

Linux 2.6.32-358.el6.x86_64 (localhost.localdomain)  2015年07月02日  _x86_64_  (24 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.77    0.00    0.12    0.00    0.00   99.11

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda               2.45         3.20       463.21    6662512  964056792

rrqm/s: number of read requests merged per second, i.e. delta(rmerge)/s

wrqm/s: number of write requests merged per second, i.e. delta(wmerge)/s

r/s: number of read I/O requests completed per second, i.e. delta(rio)/s

w/s: number of write I/O requests completed per second, i.e. delta(wio)/s

rsec/s: sectors read per second, i.e. delta(rsect)/s

wsec/s: sectors written per second, i.e. delta(wsect)/s

rkB/s: kilobytes read per second; half of rsec/s, since each sector is 512 bytes (has to be computed)

wkB/s: kilobytes written per second; half of wsec/s (has to be computed)

avgrq-sz: average size of each device I/O request, in sectors; delta(rsect+wsect)/delta(rio+wio)

avgqu-sz: average I/O queue length, i.e. delta(aveq)/s/1000 (because aveq is in milliseconds)

await: average time each device I/O operation waits, in milliseconds, i.e. delta(ruse+wuse)/delta(rio+wio)

svctm: average service time of each device I/O operation, in milliseconds, i.e. delta(use)/delta(rio+wio)

%util: percentage of each second spent doing I/O, i.e. the fraction of time the I/O queue was non-empty; delta(use)/s/1000 (because use is in milliseconds)
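A quick cross-check against the first sda line in the iostat -x sample above: rkB/s = rsec/s / 2 = 3.20 / 2 = 1.6 KB/s read, and wkB/s = 463.18 / 2 ≈ 231.6 KB/s written, which is consistent with the plain iostat figure of Blk_wrtn/s ≈ 463 blocks of 512 bytes per second.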

Disk performance testing

1. Install hdparm

yum install hdparm -y

2. Evaluate read speed

For an SSD, use the hdparm command to run a read test.

hdparm -t /dev/xvda

Running the command above (over SSH) uses hdparm to estimate the SSD's read speed.

"/dev/xvda" is the device node of the disk being tested; run "fdisk -l" to see which devices are present.
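hdparm can also report cached reads alongside buffered device reads, which helps separate memory bandwidth from real disk speed (same device name assumed as above):

hdparm -tT /dev/xvda

The IOPS and throughput tests below use fio, which CentOS does not ship by default; assuming the EPEL repository is enabled, it can be installed the same way as hdparm:

yum install fio -y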

# Test random write IOPS:

fio -direct=1 -iodepth=128 -rw=randwrite -ioengine=libaio -bs=4k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=iotest -name=Rand_Write_Testing

# Test random read IOPS:

fio -direct=1 -iodepth=128 -rw=randread -ioengine=libaio -bs=4k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=iotest -name=Rand_Read_Testing

# Test sequential write throughput:

fio -direct=1 -iodepth=64 -rw=write -ioengine=libaio -bs=1024k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=iotest -name=Write_PPS_Testing

# Test sequential read throughput:

fio -direct=1 -iodepth=64 -rw=read -ioengine=libaio -bs=1024k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=iotest -name=Read_PPS_Testing

Taking the random-write IOPS command as an example, the meaning of each parameter is explained below.

-direct=1: bypass the I/O buffer cache and write data directly to the device (direct I/O).

-iodepth=128: when using AIO, allow at most 128 outstanding I/O requests.

-rw=randwrite: the access pattern is random writes. For other tests it can be set to:

randread (random reads)

read (sequential reads)

write (sequential writes)

randrw (mixed random reads and writes)

-ioengine=libaio: use libaio (Linux-native AIO, i.e. asynchronous I/O). Applications generally issue I/O in one of two ways:

Synchronous

A synchronous call issues one I/O request at a time and waits for the kernel to complete it before returning, so a single thread's iodepth is always at most 1. This can be compensated by running many threads in parallel; typically 16-32 threads are used to keep the queue full.

Asynchronous

Asynchronous I/O, e.g. via libaio, submits a batch of requests at once and then waits for the batch to complete, which reduces the number of round trips and is more efficient.

-bs=4k: each I/O is 4 KB; this is also the default when the parameter is omitted. When testing IOPS, use a small block size such as the 4k in this example; when testing throughput, use a large block size such as 1024k.

-size=1G: the test file is 1 GiB.

-numjobs=1: run one test thread.

-runtime=1000: run for 1000 seconds. If omitted, fio keeps going until the file of the size given by -size has been written completely in chunks of -bs.

-group_reporting: aggregate the statistics of all jobs in the report instead of reporting each job separately.

-filename=iotest: name of the test file, here iotest. Testing a raw device gives the real disk performance, but writing directly to a raw device destroys the file system on it, so back up your data before doing that.

-name=Rand_Write_Testing: name of the test job; it can be anything.
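As a small sketch of the randrw mode listed above, reusing the same parameter values as the earlier commands (-rwmixread=70 makes roughly 70% of the I/O reads and 30% writes):

fio -direct=1 -iodepth=128 -rw=randrw -rwmixread=70 -ioengine=libaio -bs=4k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=iotest -name=Rand_RW_Testing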

About the disk information shown by fdisk

Command (m for help): p

Disk /dev/sdc: 465.8 GiB, 500107862016 bytes, 976773168 sectors

Units: sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 4096 bytes

I/O size (minimum/optimal): 4096 bytes / 33553920 bytes

Disklabel type: gpt

Disk identifier: 249AFED8-1A62-461E-9A07-23B686A34DF4

Command (m for help): n

Partition number (1-128, default 1):

First sector (34-976773134, default 2048):

On the difference between logical sector size and physical sector size:

The physical_block_size is the minimal size of a block the drive is able to write in an atomic operation.

The logical_block_size is the smallest size the drive is able to address (cf. the Linux kernel documentation).

The logical sector size being smaller than the physical sector size is normal for most modern disks. This is simply how Advanced Format disks are most often implemented. Some external disks use the same (4096-byte) sector size for both physical and logical sectors, and I’ve heard that some high-end internal disks now do the same, but most disks these days are Advanced Format models with 512-byte logical sectors and 4096-byte physical sectors. There’s nothing you can (or should try to) do about this.
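The logical and physical sector sizes that fdisk prints can also be read straight from sysfs (device name sdc taken from the listing above):

cat /sys/block/sdc/queue/logical_block_size     # 512
cat /sys/block/sdc/queue/physical_block_size    # 4096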

On the default first sector of 2048:

Most modern disk drives need partitions to be aligned to a 2048-sector (1 MiB) boundary to avoid writes straddling two physical sectors, but for a long time the fdisk utility and distro installers defaulted to sector 63. This can cause severe performance issues on modern disks. Drives often try to cover for it in firmware, which means the issue is still there, but the pain is just low enough that you never really find out what is wrong.

The best solution is to use either GPT/EFI partitions or switch to LVM, since traditional MBR partitioning has been an outdated concept for many years now.

On modern distros like Ubuntu, the fdisk utility is patched to default to sector 2048; in fact it no longer even lets you start a partition at sector 63 and get the alignment wrong.
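To check whether an existing partition is actually aligned, one option is to confirm that its start sector in fdisk -l is a multiple of 2048; another (assuming the first partition is sdc1) is to ask the kernel directly:

fdisk -l /dev/sdc                           # Start column should be a multiple of 2048
cat /sys/block/sdc/sdc1/alignment_offset    # 0 means the partition start is aligned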

    Original author: 知乎萌宠
    Original source: https://blog.csdn.net/weixin_33551941/article/details/113011708
    This article is reposted from the web for knowledge-sharing purposes only; if it infringes on your rights, please contact the blogger to have it removed.