Re: [bug report] shared tags causes IO hang and performance drop

On 27/04/2021 10:52, Ming Lei wrote:
>> BTW, for the performance issue which Yanhui witnessed with megaraid sas, do
>> you think it may be because of the IO sched tag issue, i.e. the total sched
>> tag depth growing vs. the fixed driver tag depth?
> I think it is highly possible. Will you work on a patch to convert to a
> per-request-queue sched tag?


Sure, I'm just hacking now to see what difference it can make to performance. Early results look promising...
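
For reference, the sizing concern is really just arithmetic: with per-hctx sched tags, the total number of requests the IO scheduler can hold grows with the number of hw queues, while the shared driver tag space stays fixed at can_queue. A per-request-queue sched tag set would keep that total bounded regardless of nr_hw_queues. Below is a minimal user-space sketch of the mismatch; the nr_hw_queues and sched_depth figures are made up for illustration, only can_queue=228 comes from the megaraid case discussed here:

#include <stdio.h>

/*
 * Toy model of the depth mismatch: with per-hctx sched tags every hw queue
 * gets its own sched tag set, so the schedulable total grows with
 * nr_hw_queues, while the shared driver tag space stays fixed at can_queue.
 * The hctx count and per-hctx depth below are illustrative only.
 */
int main(void)
{
	unsigned int nr_hw_queues = 16;   /* hypothetical hctx count */
	unsigned int sched_depth  = 64;   /* hypothetical sched tags per hctx */
	unsigned int can_queue    = 228;  /* fixed driver tag depth, as in the megaraid case */

	unsigned int total_sched = nr_hw_queues * sched_depth;

	printf("total sched tags: %u, driver tags: %u\n", total_sched, can_queue);
	if (total_sched > can_queue)
		printf("scheduler can over-commit by %u requests -> driver tag contention\n",
		       total_sched - can_queue);
	return 0;
}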

>> Are there lots of LUNs? I can imagine that megaraid
>> sas has a much larger can_queue than scsi_debug :)
> No, there are just two LUNs. The 1st LUN is a commodity SSD (queue depth
> of 32), and the performance issue is reported on this LUN; the other is an
> HDD (queue depth of 256) which is the root disk. However, the megaraid host
> tag depth is 228, another weird setting. The issue can still be reproduced
> after we set the 2nd LUN's depth to 64 to avoid driver tag contention.
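
Just to put numbers on that (my arithmetic from the depths quoted above, not anything measured): with the defaults the two LUNs can ask for 32 + 256 = 288 outstanding commands against only 228 host tags, so the HDD alone can starve the SSD of driver tags; capping the HDD at 64 brings the worst case down to 96. A quick sketch of that check:

#include <stdio.h>

/* Per-LUN queue depths vs. the shared host tag depth, using the figures
 * quoted above (SSD = 32, HDD = 256 or capped at 64, host can_queue = 228). */
static void check(unsigned int ssd, unsigned int hdd, unsigned int can_queue)
{
	unsigned int sum = ssd + hdd;

	printf("ssd=%u hdd=%u sum=%u can_queue=%u -> %s\n",
	       ssd, hdd, sum, can_queue,
	       sum > can_queue ? "driver tags can be exhausted" : "fits");
}

int main(void)
{
	check(32, 256, 228);	/* default depths: 288 > 228, contention possible */
	check(32, 64, 228);	/* HDD capped at 64: 96 <= 228, no contention */
	return 0;
}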



BTW, one more thing which Kashyap and I looked at when initially developing the hostwide tag support was the wait struct usage in the tag-exhaustion scenario:

https://lore.kernel.org/linux-block/ecaeccf029c6fe377ebd4f30f04df9f1@xxxxxxxxxxxxxx/

IIRC, we looked at a "hostwide" wait_index - it didn't seem to make a difference then, and we didn't end up making any changes here, but it is still worth remembering.
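
For anyone who hasn't read that thread: as I remember it, tag waiters are spread over a small fixed array of wait queues, and a per-hctx wait_index round-robins which queue the next waiter sleeps on; with hostwide tags the wait queues are shared but each hctx still keeps its own index, which is what the "hostwide" wait_index experiment poked at. This is only a simplified user-space model of that round-robin idea, not the sbitmap code itself:

#include <stdio.h>

#define NR_WAIT_QUEUES 8	/* a small fixed number of wait queues, as in sbitmap */

/* Simplified model: each user picks the next wait queue in round-robin
 * order via its own wait_index, spreading waiters across the queues. */
struct waiter_state {
	unsigned int wait_index;
};

static unsigned int next_wait_queue(struct waiter_state *ws)
{
	unsigned int wq = ws->wait_index;

	ws->wait_index = (ws->wait_index + 1) % NR_WAIT_QUEUES;
	return wq;
}

int main(void)
{
	struct waiter_state hctx0 = { 0 }, hctx1 = { 0 };

	/* Two hw queues sharing the tag space but keeping separate indices:
	 * both start at wait queue 0, so their waiters can pile onto the same
	 * queue; a shared ("hostwide") wait_index would make them take turns. */
	for (int i = 0; i < 4; i++)
		printf("hctx0 -> wq %u, hctx1 -> wq %u\n",
		       next_wait_queue(&hctx0), next_wait_queue(&hctx1));
	return 0;
}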

Thanks,
John


