Re: [PATCH v4 3/4] io_uring/rsrc: add init and account functions for coalesced imus

On Sun, 16 Jun 2024 18:43:13 +0100, Pavel Begunkov wrote:
> On 6/17/24 04:12, Chenliang Li wrote:
>> On Sun, 16 Jun 2024 19:04:38 +0100, Pavel Begunkov wrote:
>>> On 5/14/24 08:54, Chenliang Li wrote:
>>>> Introduce helper functions to check whether a buffer can
>>>> be coalesced or not, and gather folio data for later use.
>>>>
>>>> The coalescing optimizes time and space consumption caused
>>>> by mapping and storing multi-hugepage fixed buffers.
>>>>
>>>> A coalescable multi-hugepage buffer should fully cover its folios
>>>> (except potentially the first and last one), and these folios should
>>>> have the same size. These requirements make later processing
>>>> easier, and we also need same-sized chunks in io_import_fixed
>>>> for fast iov_iter adjustment.
>>>>
>>>> Signed-off-by: Chenliang Li <cliang01.li@xxxxxxxxxxx>
>>>> ---
>>>>    io_uring/rsrc.c | 78 +++++++++++++++++++++++++++++++++++++++++++++++++
>>>>    io_uring/rsrc.h | 10 +++++++
>>>>    2 files changed, 88 insertions(+)
>>>>
>>>> diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
>>>> index 65417c9553b1..d08224c0c5b0 100644
>>>> --- a/io_uring/rsrc.c
>>>> +++ b/io_uring/rsrc.c
>>>> @@ -871,6 +871,84 @@ static int io_buffer_account_pin(struct io_ring_ctx *ctx, struct page **pages,
>>>>    	return ret;
>>>>    }
>>>>    
>>>> +static bool __io_sqe_buffer_try_coalesce(struct page **pages, int nr_pages,
>>>> +					 struct io_imu_folio_data *data)
>>> io_can_coalesce_buffer(), you're not actually trying to
>>> do it here.
>> 
>> Will change it.
>> 
>>>> +static bool io_sqe_buffer_try_coalesce(struct page **pages, int nr_pages,
>>>> +				       struct io_imu_folio_data *data)
>>>> +{
>>>> +	int i, j;
>>>> +
>>>> +	if (nr_pages <= 1 ||
>>>> +		!__io_sqe_buffer_try_coalesce(pages, nr_pages, data))
>>>> +		return false;
>>>> +
>>>> +	/*
>>>> +	 * The pages are bound to the folio, it doesn't
>>>> +	 * actually unpin them but drops all but one reference,
>>>> +	 * which is usually put down by io_buffer_unmap().
>>>> +	 * Note, needs a better helper.
>>>> +	 */
>>>> +	if (data->nr_pages_head > 1)
>>>> +		unpin_user_pages(&pages[1], data->nr_pages_head - 1);
>>> Should be pages[0]. page[1] can be in another folio, and even
>>> though data->nr_pages_head > 1 protects against touching it,
>>> it's still flimsy.
>> 
>> But here we are unpinning the tail pages inside the coalesceable
>> folios; I think we only unpin pages[0] on failure, am I right? And
>> __io_sqe_buffer_try_coalesce has already ensured that
>> pages[1:nr_head_pages] are in the same folio and contiguous.
>
> We want the entire folio to still be pinned, but we want to leave
> just one reference, so that down the line we don't have to care
> how many refcounts / etc. have to be put down.
> 
> void unpin_user_page(struct page *page)
> {
> 	sanity_check_pinned_pages(&page, 1);
> 	gup_put_folio(page_folio(page), 1, FOLL_PIN);
> }
>
> And all of that goes to the folio as a single object, so it doesn't
> really matter which page you pass. Anyway, let's leave it as is
> then. I wish there were an unpin_folio_nr(), but there is
> unpin_user_page_range_dirty_lock(), which resembles it.

I see. Thanks for the explanation.
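
Side note: a minimal sketch of what such an unpin_folio_nr() helper
might look like, built on the existing
unpin_user_page_range_dirty_lock(). The name and shape are my
assumptions; no such helper exists in mainline:

/*
 * Hypothetical helper: drop @refs pin references from a single
 * folio. unpin_user_page_range_dirty_lock() releases one pin per
 * page in the range, and for @refs pages of the same folio that
 * amounts to gup_put_folio(folio, refs, FOLL_PIN). @refs must not
 * exceed the number of pages in the folio.
 */
static inline void unpin_folio_nr(struct folio *folio, unsigned int refs)
{
	unpin_user_page_range_dirty_lock(&folio->page, refs, false);
}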

>>>> +
>>>> +	j = data->nr_pages_head;
>>>> +	nr_pages -= data->nr_pages_head;
>>>> +	for (i = 1; i < data->nr_folios; i++) {
>>>> +		unsigned int nr_unpin;
>>>> +
>>>> +		nr_unpin = min_t(unsigned int, nr_pages - 1,
>>>> +					data->nr_pages_mid - 1);
>>>> +		if (nr_unpin == 0)
>>>> +			break;
>>>> +		unpin_user_pages(&pages[j+1], nr_unpin);
>>> same
>>>> +		j += data->nr_pages_mid;
>>> And instead of duplicating this voodoo iteration later,
>>> please just assemble a new compacted ->nr_folios-sized
>>> page array.
>> 
>> Indeed, a new page array would make things a lot easier.
>> If alloc overhead is not a concern here, then yeah I'll change it.
>
> It's not, and the upside is reducing memory footprint,
> which would be noticeable with huge pages. It's also
> kvmalloc'ed, so compacting also improves the TLB situation.

OK, will use a new page array.
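
For the record, a rough sketch of the compaction I have in mind. The
function name and the exact field usage are assumptions on my part,
not the final patch:

/*
 * Sketch: shrink the pinned-pages array to one entry per folio.
 * Assumes the layout has already been validated (same-sized middle
 * folios, per io_imu_folio_data) and that the surplus per-folio pin
 * references have been dropped as discussed above.
 */
static struct page **io_coalesce_page_array(struct page **pages,
					    struct io_imu_folio_data *data)
{
	struct page **folio_pages;
	unsigned int i, j = 0;

	folio_pages = kvmalloc_array(data->nr_folios,
				     sizeof(struct page *), GFP_KERNEL);
	if (!folio_pages)
		return NULL;

	for (i = 0; i < data->nr_folios; i++) {
		/* keep the first pinned page of each folio */
		folio_pages[i] = pages[j];
		/* the head folio may be only partially covered */
		j += i ? data->nr_pages_mid : data->nr_pages_head;
	}
	return folio_pages;
}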

Thanks,
Chenliang Li



