On 11/21/24 17:05, Jens Axboe wrote:
> On 11/21/24 9:57 AM, Jens Axboe wrote:
>> I did run a basic IRQ storage test as-is, and will compare that with the
>> llist stuff we have now. Just in terms of overhead. It's not quite a
>> networking test, but you do get the IRQ side and some burstiness in
>> terms of completions that way too, at high rates. So should be roughly
>> comparable.
>
> Perf looks comparable, it's about 60M IOPS. Some fluctuation with IRQ

60M with iopoll? That one normally shouldn't use task_work.

> driven, so won't render an opinion on whether one is faster than the
> other. What is visible though is that adding and running local task_work
> drops from 2.39% to 2.02% using spinlock + io_wq_work_list over llist,

Did you sum it up together with io_req_local_work_add()? That just
sounds a bit odd, since the add side is usually run off [soft]irq, and
I have doubts that part became faster. The running part could be,
especially with a high QD and the consistency of an SSD. Btw, what QD
was it? 32?
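
To make the comparison concrete, here is a minimal sketch of the
producer side in the two schemes. The names (my_ctx, my_req,
my_work_add_*) are made up for illustration, and it uses a plain
list_head where the patch uses io_wq_work_list; the point is only that
the llist add is a single cmpxchg loop that is already safe from
[soft]irq, while the locked variant has to mask irqs and take the lock,
so it is not obvious the add path itself gets cheaper.

#include <linux/list.h>
#include <linux/llist.h>
#include <linux/spinlock.h>

/* made-up request/context types, just enough for the example */
struct my_req {
        struct llist_node llist_node;   /* scheme A: lock-free llist */
        struct list_head list_node;     /* scheme B: locked FIFO list */
};

struct my_ctx {
        struct llist_head work_llist;   /* scheme A */
        spinlock_t work_lock;           /* scheme B */
        struct list_head work_list;     /* scheme B */
};

static void my_ctx_init(struct my_ctx *ctx)
{
        init_llist_head(&ctx->work_llist);
        spin_lock_init(&ctx->work_lock);
        INIT_LIST_HEAD(&ctx->work_list);
}

/* producer side, may be called from hard/soft irq context */
static void my_work_add_llist(struct my_ctx *ctx, struct my_req *req)
{
        /* one cmpxchg loop, no irq masking, but the list ends up LIFO */
        llist_add(&req->llist_node, &ctx->work_llist);
}

static void my_work_add_locked(struct my_ctx *ctx, struct my_req *req)
{
        unsigned long flags;

        /* irqs off + lock, but the list stays in FIFO order */
        spin_lock_irqsave(&ctx->work_lock, flags);
        list_add_tail(&req->list_node, &ctx->work_list);
        spin_unlock_irqrestore(&ctx->work_lock, flags);
}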

> and we entirely drop 2.2% of list reversing in the process.

We actually discussed this before, in a different patchset: perf is not
much help here, since the overhead and cache loading move around a lot
between functions.

I don't think we have solid proof here, especially for networking
workloads, which tend to hammer this path harder and from more CPUs.
Can we run some net benchmarks? Even better would be a good prod
experiment.
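
For reference, the "2.2% of list reversing" quoted above is on the
consumer side: llist_del_all() hands the nodes back in LIFO order, so
they have to be reversed before running, while a locked list with a
tail can be spliced out already in FIFO order. A rough sketch, reusing
the made-up my_ctx/my_req types from above and a hypothetical run_one()
handler (the real io_uring loops differ):

static void run_one(struct my_req *req);        /* hypothetical handler */

/* consumer side, llist scheme: detach everything, then reverse to FIFO */
static void my_run_work_llist(struct my_ctx *ctx)
{
        struct llist_node *node = llist_del_all(&ctx->work_llist);
        struct my_req *req, *tmp;

        node = llist_reverse_order(node);       /* the extra O(n) walk */
        llist_for_each_entry_safe(req, tmp, node, llist_node)
                run_one(req);
}

/* consumer side, locked scheme: splice out under the lock, already FIFO */
static void my_run_work_locked(struct my_ctx *ctx)
{
        LIST_HEAD(local);
        struct my_req *req, *tmp;
        unsigned long flags;

        spin_lock_irqsave(&ctx->work_lock, flags);
        list_splice_init(&ctx->work_list, &local);
        spin_unlock_irqrestore(&ctx->work_lock, flags);

        list_for_each_entry_safe(req, tmp, &local, list_node)
                run_one(req);
}
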
--
Pavel Begunkov