Re: About scsi device queue depth

Sequential read, x1K IOPS:

sdev qdepth	numjobs=1	numjobs=10	numjobs=20
16		1590		1654		1660
32		1545		1646		1654
64		1436		1085		1070
254 (default)	1436		1070		1050
What do the performance numbers mean -- IOPS or something else? And what
is the fio IO test: random or sequential IO?
Those figures are x1K IOPS read performance, so 1590 above is 1.59M
IOPS. Here's the fio script:

[global]
rw=read
direct=1
ioengine=libaio
iodepth=40
numjobs=20
bs=4k
;size=10240000m
;zero_buffers=1
group_reporting=1
;ioscheduler=noop
;cpumask=0xffe
;cpus_allowed=1-47
;gtod_reduce=1
;iodepth_batch=2
;iodepth_batch_complete=2
runtime=60
;thread
loops=10000
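For reference, the knob being swept in these tables is the per-device
sysfs attribute queue_depth. A minimal sketch (the device name sdX is a
placeholder -- substitute your test disk) that prints, rather than runs,
the sysfs writes for each depth tested in the thread:

```shell
#!/bin/sh
# Sketch, assuming a SCSI disk exposed at /sys/block/sdX (placeholder
# name): print the sysfs writes that set each queue depth from the
# tables above. Run the printed commands as root to apply them.
print_qd_cmds() {
    dev=$1
    for qd in 8 16 32 64 254; do
        printf 'echo %s > /sys/block/%s/device/queue_depth\n' "$qd" "$dev"
    done
}

print_qd_cmds sdX
```

The current depth can be read back from the same file with cat before
and after each fio run.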
Is there any effect on random read IOPS when you decrease the sdev queue
depth? For sequential IO, IO merging can be improved that way.


Hi Ming,

fio randread results:

fio iodepth=40, randread, IOPS:

sdev qdepth	numjobs=1	numjobs=10	numjobs=20
8		1308K		831K		814K
16		1435K		1073K		988K
32		1438K		1065K		990K
64		1432K		1061K		1020K
254 (default)	1439K		1099K		1083K


fio iodepth=128, randread, IOPS:

sdev qdepth	numjobs=1	numjobs=10	numjobs=20
8		1310K		860K		849K
16		1435K		1048K		958K
32		1438K		1140K		951K
64		1438K		1065K		953K
254 (default)	1439K		1140K		1056K

So randread moves in the opposite direction to sequential read with
respect to sdev queue depth: lowering the queue depth reduces randread
IOPS, where it improved sequential read.
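For completeness, a sketch of the job file behind the randread tables,
assuming it is the same script as the sequential run with only rw and
iodepth changed (the thread does not show it explicitly):

```ini
[global]
rw=randread
direct=1
ioengine=libaio
iodepth=40        ; 128 for the second table
numjobs=20        ; 1/10/20 per table column
bs=4k
group_reporting=1
runtime=60
loops=10000
```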

Thanks,
John


