Re: [PATCH RFC 00/17] playing around req alloc

On 10/02/2021 03:23, Jens Axboe wrote:
> On 2/9/21 8:14 PM, Pavel Begunkov wrote:
>> On 10/02/2021 02:08, Jens Axboe wrote:
>>> On 2/9/21 5:03 PM, Pavel Begunkov wrote:
>>>> Unfolding previous ideas on persistent req caches. Patches 4-7 alone
>>>> slashed ~20% of overhead for the nops benchmark. I haven't benchmarked
>>>> this series myself yet, but according to perf it should be ~30-40% in
>>>> total. That's for IOPOLL + inline completion cases, obviously w/o
>>>> async/IRQ completions.
>>>
>>> And task_work, which is sort-of inline.
>>>
>>>> Jens,
>>>> 1. 11/17 removes deallocations at the end of submit_sqes. Looks like
>>>> you forgot that, or just didn't do it.
>>
>> And without the patches I added, it wasn't even necessary, so
>> never mind
> 
> OK good, I was a bit confused about that one...
> 
>>>> 2. lists are slow and not great cache-wise, that's why I want at least
>>>> the combined approach from 12/17.
>>>
>>> This is only true if you're browsing the full list. If you do add-to-front
>>> for a cache, and remove-from-front, then the cache footprint of lists is
>>> good.
>>
>> Ok, good point, but I still don't think it's great. E.g. 7/17 did improve
>> performance a bit for me, as I mentioned in the related RFC. And that
>> was for inline-completed nops, going over the list/array and
>> always touching all reqs.
> 
> Agree, array is always a bit better. Just saying that it's not a huge
> deal unless you're traversing the list, in which case lists are indeed
> horrible. But for popping off the first entry (or adding one), it's not
> bad at all.

btw, looks like it can be replaced with a singly-linked list (stack).
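
For illustration, something like this; free_next and free_head are
made-up names, not from the patches:

static inline void io_req_cache_put(struct io_submit_state *state,
				    struct io_kiocb *req)
{
	/* push: a single store, no prev pointer to maintain */
	req->free_next = state->free_head;
	state->free_head = req;
}

static inline struct io_kiocb *io_req_cache_get(struct io_submit_state *state)
{
	/* pop; returns NULL when the cache is empty */
	struct io_kiocb *req = state->free_head;

	if (req)
		state->free_head = req->free_next;
	return req;
}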

> 
>>>> 3. Instead of lists in "use persistent request cache" I had in mind a
>>>> slightly different way: grow the req alloc cache to 32-128 (or take a
>>>> hint from userspace), batch-alloc by 8 as before, and recycle _all_ reqs
>>>> right into it. If it overflows, do kfree().
>>>> It should give a probabilistically high hit rate, amortising away most
>>>> of the allocations. Pros: it doesn't grow ~infinitely as lists can.
>>>> Cons: there are always counter-examples. But as I don't have numbers to
>>>> back it up, I took your implementation. Maybe we'll reconsider later.
>>>
>>> It shouldn't grow bigger than what was used, but the downside is that
>>> it will grow as big as the biggest usage ever. We could prune, if need
>>> be, of course.
>>
>> Yeah, that was the point. But not a deal-breaker in either case.
> 
> Agree
> 
>>> As far as I'm concerned, the hint from user space is the submit count.
>>
>> I mean a hint at setup, like max QD, so we can size the req cache
>> accordingly. Not that it matters
> 
> I'd rather grow it dynamically; only the first few iterations will hit
> the alloc. Which is fine, and better than pre-populating. Assuming I
> understood you correctly here...

I guess not. It's not about the number of requests per se, but the space
in the alloc cache. Like so:

struct io_submit_state {
	...
	/* capacity picked from a userspace hint at ring setup (pseudo-C) */
	void			*reqs[userspace_hint];
};
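
On the recycle side it'd be roughly the following (a sketch; nr_reqs and
reqs_capacity are made-up field names):

static void io_req_recycle(struct io_submit_state *state,
			   struct io_kiocb *req)
{
	/* keep the req while the cache has room, free it on overflow */
	if (state->nr_reqs < state->reqs_capacity)
		state->reqs[state->nr_reqs++] = req;
	else
		kfree(req);
}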

> 
>>>
>>>> I'll revise tomorrow on a fresh head + do some performance testing,
>>>> and am leaving it as an RFC until then.
>>>
>>> I'll look too and test this, thanks!
> 
> Tests out well for me with the suggested edits in place. nops are
> massively improved, as suspected. But realistic workloads benefit
> nicely too.
> 
> I'll send out a few patches I have on top tomorrow. Not fixes, but just
> further improvements/features/accounting.

Sounds great!
btw, "lock_free_list", which is not a "lock-free" list, can be
confusing; I'd suggest renaming it to free_list_lock.
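
I.e. something along these lines (the surrounding struct is made up for
illustration):

struct some_state {
	/* free_list_lock protects ->free_list; nothing lock-free here */
	spinlock_t		free_list_lock;
	struct list_head	free_list;
};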

-- 
Pavel Begunkov


