On 3/21/24 18:14, Yu Kuai wrote:
On 2024/03/22 7:03, Bart Van Assche wrote:
That test does the following (a toy model follows the list below):
* Create two request queues with a shared tag set and with different
completion times (1 ms and 100 ms).
* Submit I/O to both request queues simultaneously and set the queue
depth for both fio jobs to the number of tags. This creates contention
on tag allocation.
* After I/O has finished, check that the fio job with the shortest
completion time submitted the most requests.
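The expected outcome can be illustrated with a small user-space model.
This is a sketch only: it stands in for the shared tag set with a
semaphore, and the tag count, runtime and completion times are assumed
values, not taken from the actual test.

# Toy model: two "queues" share NR_TAGS tags; the per-queue depth
# equals the number of tags, so submitters contend for tags. The
# shared tag set is modeled as a plain semaphore (an assumption;
# the kernel uses sbitmap).
import threading
import time

NR_TAGS = 32          # assumed tag set size
RUNTIME = 2.0         # seconds
tags = threading.Semaphore(NR_TAGS)
done = {"fast": 0, "slow": 0}
lock = threading.Lock()
stop = threading.Event()

def submitter(name, completion_time):
    while not stop.is_set():
        tags.acquire()                # wait for a free driver tag
        time.sleep(completion_time)   # request is "in flight"
        tags.release()                # completion frees the tag
        with lock:
            done[name] += 1

threads = [threading.Thread(target=submitter, args=("fast", 0.001))
           for _ in range(NR_TAGS)]
threads += [threading.Thread(target=submitter, args=("slow", 0.100))
            for _ in range(NR_TAGS)]
for t in threads:
    t.start()
time.sleep(RUNTIME)
stop.set()
for t in threads:
    t.join()
print(done)   # the test's pass criterion: done["fast"] >> done["slow"]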
This test is a bit one-sided; I'm curious how the following test would
turn out as well:
- one queue is under heavy IO pressure from lots of threads, and they
can use up all the driver tags;
- another queue only issues one IO at a time; how does IO latency look
for that queue? I assume it can be bad with this patch, because the
sbitmap implementation can't guarantee low latency for it (a sketch of
this scenario follows below).
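To make that concern concrete, the same kind of toy model can
approximate the scenario. Everything below (tag count, completion
times, thread count) is an assumed illustration, not a statement about
the real sbitmap behavior.

# Toy model: many submitters on one queue keep the shared tags busy
# while a second queue issues one I/O at a time; record how long the
# single submitter waits for a tag. All parameters are assumptions.
import threading
import time

NR_TAGS = 32
tags = threading.Semaphore(NR_TAGS)
stop = threading.Event()

def heavy():
    while not stop.is_set():
        tags.acquire()
        time.sleep(0.005)   # 5 ms "completion time"
        tags.release()

def light():
    waits = []
    for _ in range(50):
        t0 = time.monotonic()
        tags.acquire()      # may queue behind the heavy submitters
        waits.append(time.monotonic() - t0)
        time.sleep(0.001)   # 1 ms "completion time"
        tags.release()
    print("max tag wait: %.1f ms" % (1000 * max(waits)))

heavies = [threading.Thread(target=heavy) for _ in range(4 * NR_TAGS)]
for t in heavies:
    t.start()
light()
stop.set()
for t in heavies:
    t.join()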
Are these use cases realistic? The sbitmap implementation guarantees
forward progress for all I/O submitters in both cases, and I think
that is sufficient. Let's optimize block layer performance for the
common cases instead of keeping features that only help rare
workloads. If users really want to improve fairness for the two
workloads mentioned above, they can use e.g. the blk-iocost controller
and give a higher weight to the low-latency workload.
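As a minimal sketch of that suggestion, assuming a cgroup v2 hierarchy
mounted at /sys/fs/cgroup, a kernel with CONFIG_BLK_CGROUP_IOCOST, and
a made-up device number, the weighting could be set up like this (as
root):

# Sketch: enable blk-iocost on the disk and weight two cgroups 9:1
# in favor of the latency-sensitive workload. The device number,
# group names and weights are illustrative assumptions.
from pathlib import Path

CGROOT = Path("/sys/fs/cgroup")
DEV = "8:16"   # MAJ:MIN of the disk (assumed)

# Enable iocost for the device and the io controller for child groups.
(CGROOT / "io.cost.qos").write_text(f"{DEV} enable=1")
(CGROOT / "cgroup.subtree_control").write_text("+io")

for name, weight in (("lowlat", 900), ("bulk", 100)):
    grp = CGROOT / name
    grp.mkdir(exist_ok=True)
    (grp / "io.weight").write_text(f"default {weight}")
# Each workload's processes are then moved into the matching group
# via <group>/cgroup.procs.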
Thanks,
Bart.