Re: [PATCH RFC 8/9] RDMA/umem: batch page unpin in __ib_umem_release()

On 12/8/20 7:29 PM, Jason Gunthorpe wrote:
> On Tue, Dec 08, 2020 at 05:29:00PM +0000, Joao Martins wrote:
> 
>>  static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem, int dirty)
>>  {
>> +	bool make_dirty = umem->writable && dirty;
>> +	struct page **page_list = NULL;
>>  	struct sg_page_iter sg_iter;
>> +	unsigned long nr = 0;
>>  	struct page *page;
>>  
>> +	page_list = (struct page **) __get_free_page(GFP_KERNEL);
> 
> Gah, no, don't do it like this!
> 
> Instead something like:
> 
> 	for_each_sg(umem->sg_head.sgl, sg, umem->nmap, i)
> 	      unpin_user_pages_range_dirty_lock(sg_page(sg), sg->length/PAGE_SIZE,
>                                                umem->writable && dirty);
> 
> And have the mm implementation split the contiguous range of pages into
> pairs of (compound head, ntails) with a bit of maths.
> 
Got it :)

I was trying to avoid another exported symbol.

Although, given your suggestion above, avoiding the export doesn't justify the loss in efficiency and clarity.
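
For reference, a rough sketch of the mm-side maths being suggested -- splitting a
contiguous page range into (compound head, ntails) pairs so each compound page is
dirtied and unpinned once rather than once per tail -- might look something like the
below. The helper name unpin_user_pages_range_dirty_lock() and the batched
unpin_compound_head() call are illustrative only, not existing APIs:

	/*
	 * Illustrative sketch: walk a physically contiguous page range and
	 * handle each compound page once, dropping ntails pin references in
	 * one go instead of iterating every tail page.
	 */
	static void unpin_user_pages_range_dirty_lock(struct page *page,
						      unsigned long npages,
						      bool make_dirty)
	{
		unsigned long i = 0;

		while (i < npages) {
			struct page *head = compound_head(page + i);
			/* pages left in the range that share this compound head */
			unsigned long ntails = min_t(unsigned long, npages - i,
					compound_nr(head) - (page + i - head));

			if (make_dirty)
				set_page_dirty_lock(head);

			/* hypothetical batched unpin: drop ntails pin refs on head */
			unpin_compound_head(head, ntails);

			i += ntails;
		}
	}

With a helper along those lines exported, __ib_umem_release() reduces to the
for_each_sg() loop quoted above.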

	Joao



