Re: [PATCH 02/11] mm/page_alloc: Convert per-cpu list protection to local_lock

On Thu, Apr 08, 2021 at 06:42:44PM +0100, Mel Gorman wrote:
> On Thu, Apr 08, 2021 at 12:52:07PM +0200, Peter Zijlstra wrote:
> > > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > > index a68bacddcae0..e9e60d1a85d4 100644
> > > --- a/mm/page_alloc.c
> > > +++ b/mm/page_alloc.c
> > > @@ -112,6 +112,13 @@ typedef int __bitwise fpi_t;
> > >  static DEFINE_MUTEX(pcp_batch_high_lock);
> > >  #define MIN_PERCPU_PAGELIST_FRACTION	(8)
> > >  
> > > +struct pagesets {
> > > +	local_lock_t lock;
> > > +};
> > > +static DEFINE_PER_CPU(struct pagesets, pagesets) = {
> > > +	.lock = INIT_LOCAL_LOCK(lock),
> > > +};
> > 
> > So why isn't the local_lock_t in struct per_cpu_pages ? That seems to be
> > the actual object that is protected by it and is already per-cpu.
> > 
> > Is that because you want to avoid the duplication across zones? Is that
> > worth the effort?
> 
> When I wrote the patch, the problem was that zone_pcp_reset freed the
> per_cpu_pages structure and it was "protected" by local_irq_save(). If
> that was converted to local_lock_irq, then the structure containing the
> lock would be freed before the lock is released, which is obviously bad.
> 
> Much later when trying to make the allocator RT-safe in general, I realised
> that locking was broken and fixed it in patch 3 of this series. With that,
> the local_lock could potentially be embedded within per_cpu_pages safely
> at the end of this series.

Fair enough; I was just wondering why the obvious solution wasn't chosen,
and neither the changelog nor a comment explained why, so I had to ask :-)
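
For readers following the thread, the "obvious solution" under discussion
would put the local_lock_t directly inside struct per_cpu_pages. Below is a
minimal sketch of what that layout and its use could look like; the struct,
variable and function names (pcp_pages_sketch, pcp_sketch, pcp_alloc_sketch)
and the simplified single list are illustrative assumptions, not the actual
mm/ code, where per_cpu_pages is allocated per-zone with alloc_percpu() and
freed by zone_pcp_reset() -- exactly the lifetime problem Mel describes above.

	/* Illustrative sketch only: assumed names and layout, not the kernel's struct. */
	#include <linux/list.h>
	#include <linux/local_lock.h>
	#include <linux/mm_types.h>
	#include <linux/percpu.h>

	struct pcp_pages_sketch {
		local_lock_t lock;	/* protects the fields below, this CPU only */
		int count;		/* pages currently on the list */
		struct list_head list;	/* simplified: one list, no migratetypes */
	};

	/*
	 * A statically defined per-CPU variable never goes away, so embedding
	 * the lock is safe here.  The real per_cpu_pages is allocated with
	 * alloc_percpu() and freed by zone_pcp_reset(), which is why embedding
	 * the lock there needed the lifetime fix in patch 3 of the series first.
	 */
	static DEFINE_PER_CPU(struct pcp_pages_sketch, pcp_sketch) = {
		.lock = INIT_LOCAL_LOCK(lock),
	};

	/* Each CPU's list head would still need INIT_LIST_HEAD() at boot. */

	static struct page *pcp_alloc_sketch(void)
	{
		struct pcp_pages_sketch *pcp;
		struct page *page = NULL;
		unsigned long flags;

		/* IRQs off on !PREEMPT_RT; a per-CPU spinlock on PREEMPT_RT. */
		local_lock_irqsave(&pcp_sketch.lock, flags);
		pcp = this_cpu_ptr(&pcp_sketch);
		if (!list_empty(&pcp->list)) {
			page = list_first_entry(&pcp->list, struct page, lru);
			list_del(&page->lru);
			pcp->count--;
		}
		local_unlock_irqrestore(&pcp_sketch.lock, flags);
		return page;
	}

As Mel notes above, once patch 3 fixes the zone_pcp_reset() path, moving the
lock into per_cpu_pages along these lines becomes a viable option at the end
of the series.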


