Re: alloc_pages_bulk()

On Thu, 11 Feb 2021 09:12:35 +0000
Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx> wrote:

> On Wed, Feb 10, 2021 at 10:58:37PM +0000, Chuck Lever wrote:
> > > Not in the short term due to bug load and other obligations.
> > > 
> > > The original series had "mm, page_allocator: Only use per-cpu allocator
> > > for irq-safe requests" but that was ultimately rejected because softirqs
> > > were affected so it would have to be done without that patch.
> > > 
> > > The last patch can be rebased easily enough but it only batch allocates
> > > order-0 pages. It's also only build tested and could be completely
> > > miserable in practice, and as I didn't even try to boot test it, let alone
> > > actually test it, it could be a giant pile of crap. To make high orders
> > > work, it would need significant reworking but if the API showed even
> > > partial benefit, it might motivate someone to reimplement the bulk
> > > interfaces to perform better.
> > > 
> > > Rebased diff, build tested only, might not even work  
> > 
> > Thanks, Mel, for kicking off a forward port.
> > 
> > It compiles. I've added a patch to replace the page allocation loop
> > in svc_alloc_arg() with a call to alloc_pages_bulk().
> > 
> > The server system deadlocks pretty quickly with any NFS traffic. Based
> > on some initial debugging, it appears that a pcplist is getting corrupted
> > and this causes the list_del() in __rmqueue_pcplist() to fail during a
> > call to alloc_pages_bulk().
> >   
> 
> Parameters to __rmqueue_pcplist are garbage as the parameter order changed.
> I'm surprised it didn't blow up in a spectacular fashion. Again, this
> hasn't been near any testing and passing a list with high orders to
> free_pages_bulk() will corrupt lists too. Mostly it's a curiosity to see
> if there is justification for reworking the allocator to fundamentally
> deal in batches and then feed batches to pcp lists and the bulk allocator
> while leaving the normal GFP API as single page "batches". While that
> would be ideal, it's relatively high risk for regressions. There is still
> some scope for adding a basic bulk allocator before considering a major
> refactoring effort.

The alloc_flags parameter reminds me that I have some asks around the
semantics of the API.  I'm concerned about the latency impact on
preemption: I want us to avoid creating something that runs for too
long with IRQs/preempt disabled.

(With SLUB's kmem_cache_free_bulk() we manage to run most of the time
with preempt and IRQs enabled, so I'm not worried about large slab bulk
frees.  SLUB's kmem_cache_alloc_bulk(), however, runs with
local_irq_disable(), so I always recommend users not to do excessive
bulk allocations.)
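
To illustrate the scale I have in mind (a minimal kernel-style sketch;
the batch size is only an example of "not excessive", and the demo
function is invented):

#include <linux/slab.h>

#define MY_BULK	16	/* small batch => short IRQ-off window */

static int demo_bulk(struct kmem_cache *cachep)
{
	void *objs[MY_BULK];

	/* All-or-nothing: returns MY_BULK on success, 0 on failure.
	 * Internally runs with IRQs disabled, hence the small batch. */
	if (!kmem_cache_alloc_bulk(cachep, GFP_KERNEL, MY_BULK, objs))
		return -ENOMEM;

	/* ... use the objects ... */

	/* The free side mostly runs with preempt/IRQs enabled. */
	kmem_cache_free_bulk(cachep, MY_BULK, objs);
	return 0;
}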

For this page bulk alloc API, I'm fine with limiting it to only support
order-0 pages. (This will also fit nicely with the PCP system, I think.)

I also suggest that the API be allowed to return fewer pages than
requested, because I want it to "exit"/return early if it would need to
enter an expensive code path (like the buddy allocator or compaction).
I'm assuming we have a flag to give us this behavior (via gfp_flags or
alloc_flags)?

My use-case is page_pool, where I can easily handle not getting the
exact number of pages, and where I want to handle low-latency network
traffic.
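
To make it concrete, here is a hedged sketch of the caller pattern I
want to be possible. The alloc_pages_bulk() prototype below is my
assumption (not the posted __alloc_pages_bulk_nodemask() signature),
and struct my_pool / pool_refill() are invented for illustration:

#include <linux/gfp.h>
#include <linux/list.h>

/* Assumed semantics: puts up to @nr_pages order-0 pages on @list and
 * returns how many it got, stopping early rather than entering the
 * buddy/compaction slow path. */
int alloc_pages_bulk(gfp_t gfp, int nr_pages, struct list_head *list);

struct my_pool {
	struct list_head cache;	/* stash of ready order-0 pages */
};

static int pool_refill(struct my_pool *pool, int want)
{
	LIST_HEAD(list);
	int got;

	got = alloc_pages_bulk(GFP_ATOMIC, want, &list);

	/* A partial result is fine here: stash whatever we got and
	 * let a slow path top the pool up outside the hot path. */
	list_splice(&list, &pool->cache);
	return got;
}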



> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index f8353ea7b977..8f3fe7de2cf7 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -5892,7 +5892,7 @@ __alloc_pages_bulk_nodemask(gfp_t gfp_mask, unsigned int order,
>  	pcp_list = &pcp->lists[migratetype];
>  
>  	while (nr_pages) {
> -		page = __rmqueue_pcplist(zone, gfp_mask, migratetype,
> +		page = __rmqueue_pcplist(zone, migratetype, alloc_flags,
>  								pcp, pcp_list);
>  		if (!page)
>  			break;



-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer



