Re: [PATCH v3 10/18] mm: Allow non-hugetlb large folios to be batch processed

On 10/03/2024 21:52, Matthew Wilcox wrote:
> On Sun, Mar 10, 2024 at 08:46:58PM +0000, Matthew Wilcox wrote:
>> On Sun, Mar 10, 2024 at 07:59:46PM +0000, Ryan Roberts wrote:
>>> I've now been able to repro this without any of my code on top - just mm-unstable and your fix for the memcg uncharging ordering issue. So we have a separate, more difficult-to-repro bug. I've discovered CONFIG_DEBUG_LIST so have enabled that. I'll try to bisect in the morning, but I suspect it will be slow going.
>>>
>>> [  390.317982] ------------[ cut here ]------------
>>> [  390.318646] list_del corruption. prev->next should be fffffc00152a9090, but was fffffc002798a490. (prev=fffffc002798a490)
>>
>> Interesting.  So prev->next is pointing to prev, ie prev is an empty
>> list, but it should be pointing to this entry ... this is feeling like
>> another missing lock.
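
Yes - for what it's worth, the invariant that CONFIG_DEBUG_LIST is checking there boils down to something like this toy userspace example (illustrative only, obviously not the kernel implementation): prev->next ending up equal to prev means the list got emptied/re-initialised behind our back, which is exactly what you'd see if two paths were manipulating the same list under different locks.

#include <stdio.h>

struct list_head { struct list_head *prev, *next; };

/* Rough equivalent of the DEBUG_LIST check that produced the splat:
 * before unlinking 'entry', prev->next must still point at 'entry'. */
static void check_list_del(struct list_head *entry)
{
	struct list_head *prev = entry->prev;

	if (prev->next != entry)
		printf("list_del corruption. prev->next should be %p, but was %p. (prev=%p)\n",
		       (void *)entry, (void *)prev->next, (void *)prev);
}

int main(void)
{
	struct list_head queue = { &queue, &queue };
	struct list_head entry;

	/* link entry onto queue */
	entry.prev = entry.next = &queue;
	queue.prev = queue.next = &entry;

	/* a racing path re-initialises the queue without unlinking entry */
	queue.prev = queue.next = &queue;

	check_list_del(&entry);	/* fires: prev->next == prev, as in the splat */
	return 0;
}
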
> 
> Let's check that we're not inverting the order of memcg_uncharge and
> removing a folio from the deferred list (build tested only, but only
> one line of this will be new to you):

OK, found it - it's another instance of the same issue...

Applied your patch (resulting code: mm-unstable (d7182786dd0a) + yesterday's fix ("mm: Remove folio from deferred split list before uncharging it") + the patch below).

The new check triggered:

[  153.459843] page: refcount:0 mapcount:0 mapping:0000000000000000 index:0xffffd5fc0 pfn:0x4da690
[  153.460667] head: order:4 entire_mapcount:0 nr_pages_mapped:0 pincount:0
[  153.461218] memcg:ffff0000c7fa1000
[  153.461519] anon flags: 0xbfffc00000a0048(uptodate|head|mappedtodisk|swapbacked|node=0|zone=2|lastcpupid=0xffff)
[  153.462678] page_type: 0xffffffff()
[  153.463294] raw: 0bfffc00000a0048 dead000000000100 dead000000000122 ffff0000fbfa29c1
[  153.470267] raw: 0000000ffffd5fc0 0000000000000000 00000000ffffffff ffff0000c7fa1000
[  153.471395] head: 0bfffc00000a0048 dead000000000100 dead000000000122 ffff0000fbfa29c1
[  153.472494] head: 0000000ffffd5fc0 0000000000000000 00000000ffffffff ffff0000c7fa1000
[  153.473357] head: 0bfffc0000020204 fffffc001269a401 dead000000000122 00000000ffffffff
[  153.481663] head: 0000001000000000 0000000000000000 00000000ffffffff 0000000000000000
[  153.482438] page dumped because: VM_BUG_ON_FOLIO(folio_order(folio) > 1 && !list_empty(&folio->_deferred_list))
[  153.483464] ------------[ cut here ]------------
[  153.484000] kernel BUG at mm/memcontrol.c:7486!
[  153.484484] Internal error: Oops - BUG: 00000000f2000800 [#1] PREEMPT SMP
[  153.485249] Modules linked in:
[  153.485621] CPU: 33 PID: 2146 Comm: usemem Not tainted 6.8.0-rc5-00463-gb5100df1d6f3 #5
[  153.486552] Hardware name: linux,dummy-virt (DT)
[  153.487300] pstate: 60400005 (nZCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
[  153.488363] pc : uncharge_folio+0x1d0/0x2c8
[  153.488922] lr : uncharge_folio+0x1d0/0x2c8
[  153.489384] sp : ffff80008ea0b6d0
[  153.489747] x29: ffff80008ea0b6d0 x28: 0000000000000000 x27: 00000000fffffffe
[  153.490626] x26: dead000000000100 x25: dead000000000122 x24: 0000000000000020
[  153.491435] x23: ffff80008ea0b918 x22: ffff0000c7f88850 x21: ffff0000c7f88800
[  153.492255] x20: ffff80008ea0b730 x19: fffffc001269a400 x18: 0000000000000006
[  153.493087] x17: 212026262031203e x16: 20296f696c6f6628 x15: 0720072007200720
[  153.494175] x14: 0720072007200720 x13: 0720072007200720 x12: 0720072007200720
[  153.495186] x11: 0720072007200720 x10: ffff0013f5e7b7c0 x9 : ffff800080128e84
[  153.496142] x8 : 00000000ffffbfff x7 : ffff0013f5e7b7c0 x6 : 80000000ffffc000
[  153.497050] x5 : ffff0013a5987d08 x4 : 0000000000000000 x3 : 0000000000000000
[  153.498041] x2 : 0000000000000000 x1 : ffff0000cbc2c500 x0 : 0000000000000063
[  153.499149] Call trace:
[  153.499470]  uncharge_folio+0x1d0/0x2c8
[  153.500045]  __mem_cgroup_uncharge_folios+0x5c/0xb0
[  153.500795]  move_folios_to_lru+0x5bc/0x5e0
[  153.501275]  shrink_lruvec+0x5f8/0xb30
[  153.501833]  shrink_node+0x4d8/0x8b0
[  153.502227]  do_try_to_free_pages+0xe0/0x5a8
[  153.502835]  try_to_free_mem_cgroup_pages+0x128/0x2d0
[  153.503708]  try_charge_memcg+0x114/0x658
[  153.504344]  __mem_cgroup_charge+0x6c/0xd0
[  153.505007]  __handle_mm_fault+0x42c/0x1640
[  153.505684]  handle_mm_fault+0x70/0x290
[  153.506136]  do_page_fault+0xfc/0x4d8
[  153.506659]  do_translation_fault+0xa4/0xc0
[  153.507140]  do_mem_abort+0x4c/0xa8
[  153.507716]  el0_da+0x2c/0x78
[  153.508169]  el0t_64_sync_handler+0xb8/0x130
[  153.508810]  el0t_64_sync+0x190/0x198
[  153.509410] Code: 910c8021 a9025bf5 a90363f7 97fd7bef (d4210000) 
[  153.510309] ---[ end trace 0000000000000000 ]---
[  153.510974] Kernel panic - not syncing: Oops - BUG: Fatal exception
[  153.511727] SMP: stopping secondary CPUs
[  153.513519] Kernel Offset: disabled
[  153.514090] CPU features: 0x0,00000020,7002014a,2140720b
[  153.514960] Memory Limit: none
[  153.515457] ---[ end Kernel panic - not syncing: Oops - BUG: Fatal exception ]---


move_folios_to_lru+0x5bc/0x5e0 is:

static unsigned int move_folios_to_lru(struct lruvec *lruvec,
		struct list_head *list)
{
	...

	if (free_folios.nr) {
		spin_unlock_irq(&lruvec->lru_lock);
		mem_cgroup_uncharge_folios(&free_folios);  <<<<<<<<<<< HERE
		free_unref_folios(&free_folios);
		spin_lock_irq(&lruvec->lru_lock);
	}

	return nr_moved;
}

And that code is from your commit 29f3843026cf ("mm: free folios directly in move_folios_to_lru()"), which is another patch in the same series. It suffers from the same problem: the folio is uncharged before it is removed from the deferred list, so the wrong lock is used - and there are 2 sites in this function that do this.
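
To spell out why the lock is wrong (as I understand it): the deferred split queue - and hence its spinlock - is chosen via folio_memcg(), so once the folio has been uncharged, folio_undo_large_rmappable() falls back to the node's queue and takes the node's lock while the folio is still on the memcg's list. So presumably the folios need to come off the deferred list while folio_memcg() is still valid, i.e. before mem_cgroup_uncharge_folios(). Something like this completely untested sketch at the point each folio is added to the batch would cover both sites, since they both uncharge the same free_folios batch:

		if (unlikely(folio_put_testzero(folio))) {
			__folio_clear_lru_flags(folio);

			/* Drop the folio from its (memcg's) deferred split
			 * list while folio_memcg() is still valid, i.e.
			 * before the batched uncharge below would make us
			 * pick the node's split queue lock instead. */
			if (folio_test_large(folio) &&
			    folio_test_large_rmappable(folio))
				folio_undo_large_rmappable(folio);

			if (folio_batch_add(&free_folios, folio) == 0) {
				spin_unlock_irq(&lruvec->lru_lock);
				mem_cgroup_uncharge_folios(&free_folios);
				free_unref_folios(&free_folios);
				spin_lock_irq(&lruvec->lru_lock);
			}
			continue;
		}

(Names as in mm-unstable; I haven't tried to compile this, it's just to illustrate the ordering I mean.)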

A quick grep over the entire series turns up a lot of hits for "uncharge". I wonder whether we need a full audit of the series for other places that could be doing the same thing?


> 
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index bb57b3d0c8cd..61fd1a4b424d 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -792,8 +792,6 @@ void folio_prep_large_rmappable(struct folio *folio)
>  {
>  	if (!folio || !folio_test_large(folio))
>  		return;
> -	if (folio_order(folio) > 1)
> -		INIT_LIST_HEAD(&folio->_deferred_list);
>  	folio_set_large_rmappable(folio);
>  }
>  
> diff --git a/mm/internal.h b/mm/internal.h
> index 79d0848c10a5..690c68c18c23 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -525,6 +525,8 @@ static inline void prep_compound_head(struct page *page, unsigned int order)
>  	atomic_set(&folio->_entire_mapcount, -1);
>  	atomic_set(&folio->_nr_pages_mapped, 0);
>  	atomic_set(&folio->_pincount, 0);
> +	if (order > 1)
> +		INIT_LIST_HEAD(&folio->_deferred_list);
>  }
>  
>  static inline void prep_compound_tail(struct page *head, int tail_idx)
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 138bcfa18234..e2334c4ee550 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -7483,6 +7483,8 @@ static void uncharge_folio(struct folio *folio, struct uncharge_gather *ug)
>  	struct obj_cgroup *objcg;
>  
>  	VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);
> +	VM_BUG_ON_FOLIO(folio_order(folio) > 1 &&
> +			!list_empty(&folio->_deferred_list), folio);
>  
>  	/*
>  	 * Nobody should be changing or seriously looking at
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index bdff5c0a7c76..1c1925b92934 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1006,10 +1006,11 @@ static int free_tail_page_prepare(struct page *head_page, struct page *page)
>  		}
>  		break;
>  	case 2:
> -		/*
> -		 * the second tail page: ->mapping is
> -		 * deferred_list.next -- ignore value.
> -		 */
> +		/* the second tail page: deferred_list overlaps ->mapping */
> +		if (unlikely(!list_empty(&folio->_deferred_list))) {
> +			bad_page(page, "on deferred list");
> +			goto out;
> +		}
>  		break;
>  	default:
>  		if (page->mapping != TAIL_MAPPING) {




