To validate NVMe driver performance data, please refer to https://www.snia.org/forums/cmsi/programs/twg
Supports statistics for iops(k) | bandwidth(MB) | latency_avg(ms) | latency_90% | latency_95% | latency_99% | latency_99.9%
usage: xnvme_fio_bench.py [-h] [-r] [-l] [-p] [-j JOB_ID] [-g GROUP_ID]
                          [-t RUNTIME_OF_EACH_JOB] [-d DESTINATION] [-s SIZE]

optional arguments:
  -h, --help            show this help message and exit
  -r, --type            info
  -l, --listjobs        destination of jobs, cannot be empty
  -p, --printjobs       print jobs
  -j JOB_ID, --job_id JOB_ID
                        job id
  -g GROUP_ID, --group_id GROUP_ID
                        group id, cannot be empty
  -t RUNTIME_OF_EACH_JOB, --runtime_of_each_job RUNTIME_OF_EACH_JOB
                        run time of each job
  -d DESTINATION, --destination DESTINATION
                        filename parameter in fio; for NVMe this is a device
                        like /dev/nvme0n1
  -s SIZE, --size SIZE  size parameter in fio, like 100M or 10G or 1T
usage: plot_fio_bw_iops_log.py [-h] [-f FILE] [-l LABLE] [-t TYPE]
xnvme fio bench plot tool
optional arguments:
  -h, --help            show this help message and exit
  -f FILE, --file FILE  bw/iops/latency file
  -l LABLE, --lable LABLE
                        label
  -t TYPE, --type TYPE  file log type: bw/io/la
At runtime, place xnvme_fio_bench.py, plot_fio_bw_iops_log.py, and the fio configuration files in the same directory.
Run ./xnvme_fio_bench.py -j fio_seq_read -t 10
For example:
root@Z690:/home/xilinx/Documents/minx/xnvme_fio_testbench_upstream/example# ./xnvme_fio_bench.py -j fio_seq_read -t 10
CPU: 12th Gen Intel(R) Core(TM) i7-12700K X86_64 64bit 20core 1670131000Hz
Memory: 31.13799285888672G
System: Linux Z690 5.4.0-122-generic #138~18.04.1-Ubuntu SMP Fri Jun 24 14:14:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
Tool: fio-3.1
Start running case fio_seq_read, runtime set to 10 seconds
Wed Jul 20 17:12:03 CST 2022
Start to process fio_seq_read
========= fio spend 0.17587899764378864 mins=======
========= run time: Wed Jul 20 17:12:14 2022=======
========= result of fio_seq_read =========
| type      | iops(k) | bandwidth(MB) | latency_avg(us) | latency_90% | latency_95% | latency_99% | latency_99.9% | iodepth | blk size | device        |
|-----------+---------+---------------+-----------------+-------------+-------------+-------------+---------------+---------+----------+---------------|
| read_0_0  | 44.6013 | 174.224       | 179.023         | 284.672     | 333.824     | 452.608     | 602.112       | 8       | 4k       | /dev/nvme0n1  |
| read_1_0  | 43.5875 | 170.264       | 183.176         | 288.768     | 346.112     | 468.992     | 618.496       | 8       | 4k       | /dev/nvme1n1  |
| read_2_0  | 44.7506 | 174.807       | 178.416         | 280.576     | 329.728     | 448.512     | 593.92        | 8       | 4k       | /dev/nvme2n1  |
| read_3_0  | 44.7366 | 174.752       | 178.448         | 280.576     | 333.824     | 452.608     | 602.112       | 8       | 4k       | /dev/nvme3n1  |
| read_4_0  | 44.4205 | 173.517       | 179.741         | 280.576     | 329.728     | 452.608     | 593.92        | 8       | 4k       | /dev/nvme4n1  |
| read_5_0  | 44.435  | 173.573       | 179.672         | 280.576     | 333.824     | 452.608     | 593.92        | 8       | 4k       | /dev/nvme5n1  |
| read_6_0  | 44.5282 | 173.938       | 179.314         | 276.48      | 329.728     | 444.416     | 585.728       | 8       | 4k       | /dev/nvme6n1  |
| read_7_0  | 44.681  | 174.535       | 178.68          | 280.576     | 329.728     | 452.608     | 593.92        | 8       | 4k       | /dev/nvme7n1  |
| read_8_0  | 44.5009 | 173.831       | 179.415         | 284.672     | 333.824     | 448.512     | 593.92        | 8       | 4k       | /dev/nvme8n1  |
| read_9_0  | 44.4577 | 173.662       | 179.569         | 280.576     | 329.728     | 448.512     | 593.92        | 8       | 4k       | /dev/nvme9n1  |
| read_10_0 | 44.0236 | 171.967       | 181.356         | 288.768     | 342.016     | 464.896     | 618.496       | 8       | 4k       | /dev/nvme10n1 |
| read_11_0 | 44.0287 | 171.986       | 181.362         | 288.768     | 342.016     | 460.8       | 602.112       | 8       | 4k       | /dev/nvme11n1 |
| read_12_0 | 43.849  | 171.285       | 182.094         | 288.768     | 342.016     | 464.896     | 610.304       | 8       | 4k       | /dev/nvme12n1 |
| read_13_0 | 43.9812 | 171.801       | 181.532         | 288.768     | 342.016     | 464.896     | 602.112       | 8       | 4k       | /dev/nvme13n1 |
| read_14_0 | 43.6048 | 170.331       | 183.091         | 288.768     | 346.112     | 468.992     | 618.496       | 8       | 4k       | /dev/nvme14n1 |
| read_15_0 | 43.7559 | 170.921       | 182.454         | 288.768     | 346.112     | 468.992     | 610.304       | 8       | 4k       | /dev/nvme15n1 |
| read_16_0 | 43.9041 | 171.5         | 181.848         | 288.768     | 342.016     | 464.896     | 610.304       | 8       | 4k       | /dev/nvme16n1 |
| read_17_0 | 43.8321 | 171.219       | 182.15          | 288.768     | 342.016     | 464.896     | 618.496       | 8       | 4k       | /dev/nvme17n1 |
After the run finishes, two folders, data and log, are generated.
The data folder records fio's output statistics; the log folder records the intermediate sampled data for bw, iops, and latency, which can be plotted with plot_fio_bw_iops_log.py.
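The bw/iops log files fio emits are simple comma-separated series ("time_ms, value, direction, blocksize[, offset]"; bandwidth values are in KiB/s). A minimal parsing sketch, not the plot script's actual implementation, could look like this:

```python
# Hedged sketch: parse fio bandwidth-log lines for plotting.
# Line format: "time_ms, value_KiB/s, ddir, blocksize[, offset]".
def parse_fio_bw_log(lines):
    """Return (times_ms, bw_mib_s) from fio bandwidth-log lines."""
    times, bws = [], []
    for line in lines:
        parts = [p.strip() for p in line.split(",")]
        if len(parts) < 2:
            continue  # skip blank or malformed lines
        times.append(int(parts[0]))
        bws.append(int(parts[1]) / 1024)  # KiB/s -> MiB/s
    return times, bws

# Illustrative sample lines (made up, not from the run above)
sample = [
    "1000, 178000, 0, 4096, 0",
    "2000, 180000, 0, 4096, 0",
]
t, bw = parse_fio_bw_log(sample)
print(t, [round(x, 1) for x in bw])
```

The parsed series can then be fed to any plotting library, which is essentially what plot_fio_bw_iops_log.py does for the bw/io/la log types.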
Supports running fio cases in groups.
Create a group folder in the directory and symlink the fio configuration files to be run as a group into that folder.
Then run ./xnvme_fio_bench.py -g group
bs=4k: the block size of a single I/O is 4k. In general, block sizes are multiples of the sector size (512 bytes): 512B, 4K, 16K ... 1M, 4M. I/O smaller than 16K is generally counted as small, and I/O larger than 16K as large.
ioengine=libaio: the I/O engine that issues the asynchronous read/write requests; the libaio engine is used here, but other engines such as psync can be specified. libaio: Linux native asynchronous I/O. Note that Linux may only support queued behavior with non-buffered I/O (set direct=1 or buffered=0). This engine defines engine specific options.
randrepeat=1: for random I/O workloads, seeds the random generator so that the I/O pattern is predictable and every repeated run generates the same sequence. randrepeat defaults to true; if randrepeat=0 is not set, this parameter does not affect seqread, but it does affect seqwrite, randwrite, and randread.
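Taken together, these parameters might appear in a fio job file along the lines of the sketch below (the job name, device path, and values are illustrative assumptions, not the repository's actual fio_seq_read file):

```ini
; Illustrative fio job sketch; values are assumptions, not the repo's config
[global]
ioengine=libaio     ; Linux native asynchronous I/O
direct=1            ; non-buffered I/O, needed for queued libaio behavior
bs=4k               ; 4 KiB per I/O, a multiple of the 512-byte sector size
iodepth=8
randrepeat=1        ; reproducible random sequence across repeated runs
runtime=10
time_based=1

[seq_read]
rw=read
filename=/dev/nvme0n1
size=10G
```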
%util: the fraction of the sampling interval spent on I/O, i.e. the total time spent processing I/O within the interval divided by the interval length. For example, over a 1-second interval, if the device spends 0.8 s processing I/O and is idle for 0.2 s, then %util = 0.8/1 = 80%. The value therefore indicates how busy the device is.
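The same arithmetic as a tiny sketch:

```python
def utilization(busy_s: float, interval_s: float) -> float:
    """%util: time spent on I/O divided by the sampling interval, as a percentage."""
    return busy_s / interval_s * 100

# 0.8 s busy out of a 1 s interval, as in the example above
print(f"{utilization(0.8, 1.0):.0f}%")  # 80%
```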
NAND-based SSS device controllers map logical block addresses (LBAs) to physical block addresses (PBAs) on the NAND media in order to achieve the best NAND performance and endurance. The SSS device manages this LBA-to-PBA mapping with internal processes that operate independently of the host.
The sum of this activity is referred to as “flash management”.
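As a toy illustration of the mapping just described (a sketch, not a real FTL): writes go out-of-place, so rewriting the same LBA lands on a new PBA, and the stale copy becomes work for flash management to reclaim.

```python
# Toy LBA-to-PBA mapping sketch (illustrative only, not a real controller)
class ToyFTL:
    def __init__(self):
        self.l2p = {}        # LBA -> PBA mapping table
        self.next_pba = 0    # naive append-only allocator

    def write(self, lba):
        # Out-of-place write: allocate a fresh PBA and update the map;
        # any previous PBA for this LBA becomes garbage to reclaim later.
        self.l2p[lba] = self.next_pba
        self.next_pba += 1
        return self.l2p[lba]

ftl = ToyFTL()
first = ftl.write(42)
second = ftl.write(42)   # same LBA, different PBA after the rewrite
print(first, second)     # 0 1
```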
The performance of the flash management during a test, and hence the overall performance of the SSS device during the test, depends critically on:
1. Write History and Preconditioning: the state of the device prior to the test
2. Workload Pattern: the pattern of the I/O (r/w mix, block size, etc.) written to the device during the test
3. Data Pattern: the actual bits in the data payload written to the device
The methodologies defined in the SSS Performance Test Specification (SSS PTS) attempt to create consistent conditions for items 1-3 so that the only variable is the device under test.