Re: [PATCH next v1 2/2] io_uring: limit local tw done

On 11/22/24 10:01 AM, Pavel Begunkov wrote:
> On 11/21/24 17:05, Jens Axboe wrote:
>> On 11/21/24 9:57 AM, Jens Axboe wrote:
>>> I did run a basic IRQ storage test as-is, and will compare that with the
>>> llist stuff we have now. Just in terms of overhead. It's not quite a
>>> networking test, but you do get the IRQ side and some burstiness in
>>> terms of completions that way too, at high rates. So should be roughly
>>> comparable.
>>
>> Perf looks comparable, it's about 60M IOPS. Some fluctuation with IRQ
> 
> 60M with iopoll? That one normally shouldn't use task_work

Maybe that wasn't clear, but it's IRQ-driven IO. Otherwise indeed
there'd be no task_work in use.

>> driven, so I won't render an opinion on whether one is faster than the
>> other. What is visible though is that adding and running local task_work
>> drops from 2.39% to 2.02% using spinlock + io_wq_work_list over llist,
> 
> Did you sum it up with io_req_local_work_add()? Just sounds a bit
> weird, since that's usually run off [soft]irq, and I have doubts that
> part became faster. The running side could be, especially with high QD
> and the consistency of an SSD. Btw, what QD was it? 32?

It may just show up more frequently in the profile, now that the list
reversal is gone. Profiling isn't 100% exact.
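
For reference, the drain pattern in question is roughly the below. Just
a sketch using the generic <linux/llist.h> helpers, not the exact
io_uring code:

	struct llist_node *node;

	/*
	 * llist_add() pushes at the head, so taking the whole batch
	 * hands it back in LIFO order...
	 */
	node = llist_del_all(&ctx->work_llist);

	/* ...and we pay an O(n) walk just to restore FIFO order */
	node = llist_reverse_order(node);

	while (node) {
		struct llist_node *next = node->next;

		/* run the task_work item embedding this node */
		node = next;
	}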

>> and we entirely drop 2.2% of list reversing in the process.
> 
> We actually discussed it before, in a different patchset: perf is not
> much help here, the overhead and cache loading move around a lot
> between functions.
> 
> I don't think we have solid proof here, especially for networking
> workloads, which tend to hammer it harder from more CPUs. Can we run
> some net benchmarks? Even better would be a good prod experiment.

Already in motion. I ran some here and they didn't show any differences
at all, but the task_work load was also fairly light. David is running
the networking side and we'll see what that says.

I don't particularly love list + lock for this, but at the end of the
day, the only real downside is the IRQ-disabling nature of it.
Everything else is simpler, and it avoids the really annoying LIFO
nature of llist. I'd expect, all things being equal, that list + lock is
going to be ever so slightly slower. Both will bounce the list
cacheline, so there's no difference in cost on that side. But once you
add the list reversal to the mix, that's going to push it to being an
overall win.
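
To make that concrete, the lock + list variant looks roughly like the
below. Names are made up for illustration, this is just the
io_wq_work_list idea, not the actual patch:

	/* illustrative sketch only; init with first = NULL, lastp = &first */
	struct tw_node {
		struct tw_node *next;
	};

	struct tw_list {
		struct tw_node *first;
		struct tw_node **lastp;		/* tail pointer, O(1) append */
		spinlock_t lock;
	};

	/* producer side, safe to call from [soft]irq */
	static void tw_list_add(struct tw_list *tl, struct tw_node *node)
	{
		unsigned long flags;

		node->next = NULL;
		spin_lock_irqsave(&tl->lock, flags);
		*tl->lastp = node;
		tl->lastp = &node->next;
		spin_unlock_irqrestore(&tl->lock, flags);
	}

	/* consumer side: splice out the whole batch, already FIFO */
	static struct tw_node *tw_list_splice(struct tw_list *tl)
	{
		struct tw_node *node;
		unsigned long flags;

		spin_lock_irqsave(&tl->lock, flags);
		node = tl->first;
		tl->first = NULL;
		tl->lastp = &tl->first;
		spin_unlock_irqrestore(&tl->lock, flags);
		return node;
	}

The add side trades llist's lock-free cmpxchg loop for an irqsave lock,
which is where I'd expect the slight slowdown, but the splice hands back
the batch in ready-to-run order and the reversal walk goes away
entirely.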

-- 
Jens Axboe



