Hi, John
On 2022/09/27 21:45, John Garry wrote:
> On 27/09/2022 14:14, Yu Kuai wrote:
>> Hi, John
>> On 2022/09/27 21:06, John Garry wrote:
>>> On 27/09/2022 14:01, Yu Kuai wrote:
>>>> This reverts commit 24cd0b9bfdff126c066032b0d40ab0962d35e777.
>>>>
>>>> 1) commit 4e89dce72521 ("iommu/iova: Retry from last rb tree node if
>>>> iova search fails") tries to fix the case where iova allocation can
>>>> fail while there is still free space available. This is not
>>>> backported to 5.10 stable.
>>> This arrived in 5.11, I think.
>>>> 2) commit fce54ed02757 ("scsi: hisi_sas: Limit max hw sectors for v3
>>>> HW") fixes the performance regression introduced by 1); however, it
>>>> is only a temporary solution and causes an io performance regression
>>>> of its own, because it limits the max io size to PAGE_SIZE * 32
>>>> (128k for a 4k page size).
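For reference, the 128k figure follows directly from that cap; a quick sketch of the arithmetic, assuming a 4k PAGE_SIZE:

```shell
# The cap is PAGE_SIZE * 32 bytes per io; with 4k pages that is 128k,
# or 256 sectors of 512 bytes (the unit the block layer counts in).
page_size=4096
max_bytes=$((page_size * 32))
echo "$((max_bytes / 1024))k"          # -> 128k
echo "$((max_bytes / 512)) sectors"    # -> 256 sectors
```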
>>> Did you really notice a performance regression? In what scenario?
>>> Which kernel versions?
>> We are using 5.10, and the test tool is fs_mark. It is doing
>> writeback and benefits from io merge: before this patch avgqusz is
>> 300+, and this patch limits avgqusz to 128.
> OK, so I think it's ok to revert for 5.10.
>> I think that in any other case where the io size is greater than
>> 128k, this patch will probably have defects.
> However, both 5.15 stable and 5.19 mainline include fce54ed02757 - it
> was automatically backported for 5.15 stable. Please double check that.
> And can you also check performance for those kernels?
I'm pretty sure io splitting can degrade performance, especially for
HDD, because blk-mq can't guarantee that the split ios are dispatched to
the disk sequentially. However, this is usually not a problem with a
proper max_sectors_kb.
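To make that concrete, a rough sketch (my own illustration, not from the measurements below) of how many pieces a single large io is split into once max_sectors_kb is smaller than the io size:

```shell
# A 1m io against a queue limited to 128k per request is split into
# ceil(1024 / 128) = 8 bios, which may not reach the disk back to back.
io_size_kb=1024
max_sectors_kb=128
splits=$(( (io_size_kb + max_sectors_kb - 1) / max_sectors_kb ))
echo "$splits splits per io"    # -> 8 splits per io
```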
Here is an example showing that if max_sectors_kb is 128k, performance
drops a lot under high concurrency:

https://lore.kernel.org/all/20220408073916.1428590-1-yukuai3@xxxxxxxxxx/

Here I set max_sectors_kb to 128k manually, and 1m random io performance
drops as io concurrency increases:
| numjobs | v5.18-rc1 |
| ------- | --------- |
| 1 | 67.7 |
| 2 | 67.7 |
| 4 | 67.7 |
| 8 | 67.7 |
| 16 | 64.8 |
| 32 | 59.8 |
| 64 | 54.9 |
| 128 | 49 |
| 256 | 37.7 |
| 512 | 31.8 |
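Reading the table another way (my own arithmetic on the numbers above), the loss from the numjobs=1 baseline to numjobs=512 is roughly half the throughput:

```shell
# Values from the table, scaled by 10 to stay in shell integer arithmetic.
base=677     # 67.7 at numjobs=1
worst=318    # 31.8 at numjobs=512
echo "$(( (base - worst) * 100 / base ))% drop"    # -> 53% drop
```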
Thanks,
Kuai
> The reason we had fce54ed02757 was that 4e89dce72521 hammered
> performance when the IOMMU was enabled, and at least I saw no
> performance regression from fce54ed02757 in other scenarios.
> Thanks,
> John