On 10/10/2019 11:30, Ming Lei wrote:
>> Yes, hisi_sas. So, right, it is single queue today on mainline, but I
>> manually made it multiqueue on my dev branch just to test this series.
>> Otherwise I could not test it for that driver.
>>
>> My dev branch is here, if interested:
>> https://github.com/hisilicon/kernel-dev/commits/private-topic-sas-5.4-mq
> Your conversion shouldn't work, given that you do not change .can_queue
> in the 'hisi_sas_v3: multiqueue support' patch.
Ah, I missed that, but I don't think it will make a real difference,
since I'm only using a single disk, so can_queue shouldn't really come
into play. But...
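
For context: blk-mq allocates a separate tag set of can_queue depth for
each exposed hw queue, so raising nr_hw_queues without scaling can_queue
down would oversubscribe an HBA-wide tag pool. Something along these
lines is presumably what such a conversion would need (assumed function
and variable names, not the actual hisi_sas_v3 code):

#include <scsi/scsi_host.h>

/*
 * Illustrative sketch only, not the actual hisi_sas_v3 patch: with a
 * single HBA-wide tag pool, the host-wide depth has to be divided
 * across the exposed hw queues, because blk-mq will allocate can_queue
 * tags for each of them.
 */
static void example_setup_hw_queues(struct Scsi_Host *shost,
				    unsigned int nr_hw_queues,
				    unsigned int hba_wide_depth)
{
	shost->nr_hw_queues = nr_hw_queues;
	/* keep nr_hw_queues * can_queue within the HBA-wide tag space */
	shost->can_queue = hba_wide_depth / nr_hw_queues;
}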
> As discussed before, the tags on hisilicon V3 are HBA-wide. If you
> switch to real hw queues, each hw queue has to own its independent
> tags. However, that isn't supported by the V3 hardware.
I am generating the tags internally in the driver now, so the hostwide
tag limitation should not be an issue.
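
The general shape of that is roughly the following minimal sketch; the
structure name, locking scheme and depth are assumptions for
illustration, not the actual hisi_sas code:

#include <linux/bitmap.h>
#include <linux/errno.h>
#include <linux/spinlock.h>

/*
 * Minimal illustration of driver-internal tag generation for a single
 * HBA-wide tag space (names and depth are assumed): the driver hands
 * out its own hardware tags instead of reusing the blk-mq tag, so the
 * blk-mq hw queue layout no longer has to match the hardware tag
 * layout.
 */
#define EXAMPLE_MAX_HW_TAGS	4096

struct example_hw_tags {
	DECLARE_BITMAP(bits, EXAMPLE_MAX_HW_TAGS);
	spinlock_t lock;
};

static int example_alloc_hw_tag(struct example_hw_tags *tags)
{
	unsigned long flags;
	int tag;

	spin_lock_irqsave(&tags->lock, flags);
	tag = find_first_zero_bit(tags->bits, EXAMPLE_MAX_HW_TAGS);
	if (tag < EXAMPLE_MAX_HW_TAGS)
		set_bit(tag, tags->bits);
	else
		tag = -EBUSY;	/* HBA-wide pool exhausted */
	spin_unlock_irqrestore(&tags->lock, flags);

	return tag;
}

static void example_free_hw_tag(struct example_hw_tags *tags, int tag)
{
	clear_bit(tag, tags->bits);	/* atomic, no lock needed to clear */
}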
And, to be clear, I am not paying too much attention to performance, but
rather just hotplugging while running IO.
An update on testing:
I did some scripted overnight testing. The script essentially loops like
this (a rough sketch of the loop follows the list):
- online all CPUs
- run fio bound to a limited set of CPUs covering a hctx mask for 1
  minute
- offline those CPUs
- wait 1 minute (> SCSI or NVMe timeout)
- and repeat
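
For reference, a minimal sketch of that loop as a standalone C program
is below; the CPU range, block device and fio options are placeholders,
not the exact script used:

/*
 * Rough sketch of the loop described above (run as root on Linux).
 * The CPU range, fio job options and timings are illustrative only.
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define NR_CPUS_TOTAL	64	/* assumed number of CPUs in the system */
#define HCTX_FIRST_CPU	1	/* assumed CPUs backing one hctx's mask */
#define HCTX_LAST_CPU	3

static void set_cpu_online(int cpu, int online)
{
	char path[64];
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/devices/system/cpu/cpu%d/online", cpu);
	f = fopen(path, "w");
	if (!f)
		return;
	fputc(online ? '1' : '0', f);
	fclose(f);
}

int main(void)
{
	for (;;) {
		int cpu;

		/* online all CPUs (CPU 0 normally cannot be offlined) */
		for (cpu = 1; cpu < NR_CPUS_TOTAL; cpu++)
			set_cpu_online(cpu, 1);

		/*
		 * run fio bound to the CPUs covering one hctx for 1 minute;
		 * backgrounded so the CPUs can be offlined while I/O is
		 * still in flight (device path and options are placeholders)
		 */
		system("fio --name=hotplug --filename=/dev/nvme0n1 "
		       "--rw=randread --ioengine=libaio --iodepth=32 "
		       "--cpus_allowed=1-3 --time_based --runtime=60 &");

		/* offline those CPUs while fio is still running */
		sleep(10);
		for (cpu = HCTX_FIRST_CPU; cpu <= HCTX_LAST_CPU; cpu++)
			set_cpu_online(cpu, 0);

		/*
		 * wait for fio to finish plus 1 minute, i.e. longer than
		 * the SCSI/NVMe command timeout, then repeat
		 */
		sleep(60 + 60);
	}
	return 0;
}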
SCSI is actually quite stable, but NVMe isn't. For NVMe I am finding
that some fio processes never die, with IOPS at 0. I don't see any NVMe
timeout reported. Did you do any NVMe testing of this sort?
Thanks,
John
> See previous discussion:
> https://marc.info/?t=155928863000001&r=1&w=2
>
> Thanks,
> Ming