> On 12/01/2021 09:06, Ming Lei wrote:
> >> OPs read. Here's the fio script:
> >>
> >> [global]
> >> rw=read
> >> direct=1
> >> ioengine=libaio
> >> iodepth=40
> >> numjobs=20
> >> bs=4k
> >> ;size=10240000m
> >> ;zero_buffers=1
> >> group_reporting=1
> >> ;ioscheduler=noop
> >> ;cpumask=0xffe
> >> ;cpus_allowed=1-47
> >> ;gtod_reduce=1
> >> ;iodepth_batch=2
> >> ;iodepth_batch_complete=2
> >> runtime=60
> >> ;thread
> >> loops = 10000
> >
> > Is there any effect on random read IOPS when you decrease the sdev queue
> > depth? For sequential IO, IO merging can be enhanced that way.
>
> Let me check...

John - can you check your test with rq_affinity=2 and nomerges=2?

I have noticed drops similar to what you have reported: once we get near
the peak of the sdev or host queue depth, performance sometimes drops due
to contention. But this behavior keeps changing, since kernel development
in this area has been very active, so I don't remember the exact kernel
versions involved. I have a similar setup (16 SSDs) and will try a similar
test on the latest kernel.

BTW - I remember rq_affinity=2 playing a major role in this kind of issue;
I usually test with rq_affinity=2.

> Thanks,
> John
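For reference, the knobs discussed in this thread are standard block-layer and SCSI sysfs files. A minimal sketch of the suggested A/B configuration follows; `sdX` and the queue-depth value 16 are placeholders for illustration, and the guard only writes when the sysfs file is actually present and writable (i.e. running as root against a real device):

```shell
# Sketch of the tuning discussed above; sdX is a placeholder device name.
dev=sdX

set_knob() {
    # Write the value only when the sysfs file is writable; otherwise
    # just report the intent, so the script is safe to dry-run.
    file=$1; val=$2
    if [ -w "$file" ]; then
        echo "$val" > "$file"
    else
        echo "would set $file = $val"
    fi
}

# rq_affinity=2 forces completions onto the submitting CPU;
# nomerges=2 disables all request merging for the A/B comparison.
set_knob "/sys/block/$dev/queue/rq_affinity" 2
set_knob "/sys/block/$dev/queue/nomerges" 2

# Ming's suggestion: lower the sdev queue depth to see whether IO
# merging improves sequential throughput. 16 is an arbitrary example.
set_knob "/sys/block/$dev/device/queue_depth" 16
```

Changes made this way are not persistent across reboots, which makes the script convenient for quick before/after fio runs.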