Re: + mm-page_alloc-protect-pcp-lists-with-a-spinlock-fix.patch added to mm-unstable branch

On Thu, 7 Jul 2022 18:35:12 -0600 Yu Zhao <yuzhao@xxxxxxxxxx> wrote:

> > This relentless drive towards mm-stable: I for one cannot keep up.
> > I'd like to ask for slowing down a bit - my intention had been to
> > reach testing maple tree again (it's not yet what I'd call stable),
> > but this and a couple of other issues got in the way.  More mails
> > to write.
> 
> Sorry for not being clear (it doesn't seem confusing to me because I
> don't trust the bot):
> 
> There were two reports, the first one [1] tested v4 without the fix
> [2]; the second one [3] tested v5 at patch 5/7 (not the whole series).
> 
> v4 + the fix is good; v5 the whole series is good.
> 
> Please do not drop patch 7/7 and do not add this fix.

As Vlastimil sleuthed out, the
"BUG:sleeping_function_called_from_invalid_context_at_mm/gup.c"
reporter was using the v4 series, so I've restored 7/7 ("mm/page_alloc:
replace local_lock with normal spinlock") for tomorrow's linux-next.

I retained this fix against 2/7 and reworked 7/7 appropriately.  So it
is now

/* Lock and remove page from the per-cpu list */
static struct page *rmqueue_pcplist(struct zone *preferred_zone,
			struct zone *zone, unsigned int order,
			gfp_t gfp_flags, int migratetype,
			unsigned int alloc_flags)
{
	struct per_cpu_pages *pcp;
	struct list_head *list;
	struct page *page;
	unsigned long flags;
	unsigned long __maybe_unused UP_flags;

	/*
	 * spin_trylock may fail due to a parallel drain. In the future, the
	 * trylock will also protect against IRQ reentrancy.
	 */
	pcp_trylock_prepare(UP_flags);
	pcp = pcp_spin_trylock_irqsave(zone->per_cpu_pageset, flags);
	if (!pcp) {
		pcp_trylock_finish(UP_flags);
		return NULL;
	}

	/*
	 * On allocation, reduce the number of pages that are batch freed.
	 * See nr_pcp_free() where free_factor is increased for subsequent
	 * frees.
	 */
	pcp->free_factor >>= 1;
	list = &pcp->lists[order_to_pindex(migratetype, order)];
	page = __rmqueue_pcplist(zone, order, migratetype, alloc_flags, pcp, list);
	pcp_spin_unlock_irqrestore(pcp, flags);
	pcp_trylock_finish(UP_flags);
	if (page) {
		__count_zid_vm_events(PGALLOC, page_zonenum(page), 1);
		zone_statistics(preferred_zone, zone, 1);
	}
	return page;
}

btw, the leading comment implies (to me) that the page is to be locked.
The comment could do with a rethink.
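Just as a straw man, something along these lines would avoid the "lock the page" reading:

```c
/* Remove a page from the per-cpu list, caller must protect the list */
```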
