On 10/03/2024 11:11, Matthew Wilcox wrote:
> On Sun, Mar 10, 2024 at 11:01:06AM +0000, Ryan Roberts wrote:
>>> So after my patch, instead of calling (in order):
>>>
>>> page_cache_release(folio);
>>> folio_undo_large_rmappable(folio);
>>> mem_cgroup_uncharge(folio);
>>> free_unref_page()
>>>
>>> it calls:
>>>
>>> __page_cache_release(folio, &lruvec, &flags);
>>> mem_cgroup_uncharge_folios()
>>> folio_undo_large_rmappable(folio);
>>
>> I was just looking at this again, and something pops out...
>>
>> You have swapped the order of folio_undo_large_rmappable() and
>> mem_cgroup_uncharge(). But folio_undo_large_rmappable() calls
>> get_deferred_split_queue() which tries to get the split queue from
>> folio_memcg(folio) first and falls back to pgdat otherwise. If you are now
>> calling mem_cgroup_uncharge_folios() first, will that remove the folio from
>> the cgroup? Then we are operating on the wrong list? (just a guess based on
>> the name of the function...)
>
> Oh my. You've got it. This explains everything.

Thank you! I've just taken today's mm-unstable, added your official patch to fix
the ordering, and applied my large folio swap-out series on top (v4, which I
haven't posted yet). In testing that, I'm seeing another oops :-( That's exactly
how I discovered the original problem, and I was hoping that with your fix, this
would unblock me. Given I can only repro this when my changes are on top, I guess
my code is most likely buggy, but perhaps you can take a quick look at the oops
and tell me what you think?

[   96.372503] BUG: Bad page state in process usemem  pfn:be502
[   96.373336] page: refcount:0 mapcount:0 mapping:000000005abfa8d5 index:0x0 pfn:0xbe502
[   96.374341] aops:0x0 ino:fffffc0001f940c8
[   96.374893] flags: 0x7fff8000000000(node=0|zone=0|lastcpupid=0xffff)
[   96.375653] page_type: 0xffffffff()
[   96.376071] raw: 007fff8000000000 0000000000000000 fffffc0001f94090 ffff0000c99ee860
[   96.377055] raw: 0000000000000000 0000000000000000 00000000ffffffff 0000000000000000
[   96.378650] page dumped because: non-NULL mapping
[   96.379828] Modules linked in: binfmt_misc nls_iso8859_1 dm_multipath scsi_dh_rdac scsi_dh_emc scsi_dh_alua sch_fq_codel drm efi_pstore ip_tables x_tables autofs4 xfs btrfs blake2b_generic raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor xor_neon raid6_pq libcrc32c raid1 raid0 crct10dif_ce ghash_ce sha2_ce virtio_net sha256_arm64 net_failover sha1_ce virtio_blk failover virtio_scsi virtio_rng aes_neon_bs aes_neon_blk aes_ce_blk aes_ce_cipher
[   96.386802] CPU: 13 PID: 4713 Comm: usemem Not tainted 6.8.0-rc5-ryarob01-swap-out-v4 #2
[   96.387691] Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
[   96.388887] Call trace:
[   96.389348]  dump_backtrace+0x9c/0x128
[   96.390213]  show_stack+0x20/0x38
[   96.390688]  dump_stack_lvl+0x78/0xc8
[   96.391163]  dump_stack+0x18/0x28
[   96.391545]  bad_page+0x88/0x128
[   96.391893]  get_page_from_freelist+0xa94/0x1bc0
[   96.392407]  __alloc_pages+0x194/0x10b0
[   96.392833]  alloc_pages_mpol+0x98/0x278
[   96.393278]  vma_alloc_folio+0x74/0xd8
[   96.393674]  __handle_mm_fault+0x7ac/0x1470
[   96.394146]  handle_mm_fault+0x70/0x2c8
[   96.394575]  do_page_fault+0x100/0x530
[   96.395013]  do_translation_fault+0xa4/0xd0
[   96.395476]  do_mem_abort+0x4c/0xa8
[   96.395869]  el0_da+0x30/0xa8
[   96.396229]  el0t_64_sync_handler+0xb4/0x130
[   96.396735]  el0t_64_sync+0x1a8/0x1b0
[   96.397133] Disabling lock debugging due to kernel taint
[  112.507052] Adding 36700156k swap on /dev/ram0.  Priority:-2 extents:1 across:36700156k SS
[  113.131515] ------------[ cut here ]------------
[  113.132190] UBSAN: array-index-out-of-bounds in mm/vmscan.c:1654:14
[  113.132892] index 7 is out of range for type 'long unsigned int [5]'
[  113.133617] CPU: 9 PID: 528 Comm: kswapd0 Tainted: G B 6.8.0-rc5-ryarob01-swap-out-v4 #2
[  113.134705] Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
[  113.135500] Call trace:
[  113.135776]  dump_backtrace+0x9c/0x128
[  113.136218]  show_stack+0x20/0x38
[  113.136574]  dump_stack_lvl+0x78/0xc8
[  113.136964]  dump_stack+0x18/0x28
[  113.137322]  __ubsan_handle_out_of_bounds+0xa0/0xd8
[  113.137885]  isolate_lru_folios+0x57c/0x658
[  113.138352]  shrink_lruvec+0x5b4/0xdf8
[  113.138751]  shrink_node+0x3f0/0x990
[  113.139152]  balance_pgdat+0x3d0/0x810
[  113.139579]  kswapd+0x268/0x568
[  113.139936]  kthread+0x118/0x128
[  113.140289]  ret_from_fork+0x10/0x20
[  113.140686] ---[ end trace ]---

The UBSAN issue reported for mm/vmscan.c:1654 is:

	nr_skipped[folio_zonenum(folio)] += nr_pages;

nr_skipped is a stack array of 5 elements, so I guess folio_zonenum(folio) is
returning 7. That comes from the folio's flags. I guess this is most likely just
a side effect of the corrupted folio, due to someone writing to it while it's on
the free list?
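In case a concrete model of that failure mode helps: as far as I understand it,
folio_zonenum() just decodes the zone index from a few bits of folio->flags (via
page_zonenum(), using ZONES_PGSHIFT/ZONES_MASK, whose exact values depend on
config). So once the flags word has been scribbled on, the decoded zone can be
anything, including a value >= MAX_NR_ZONES, and the nr_skipped[] increment then
lands past the end of the 5-entry stack array. Below is a minimal userspace
sketch of that decode; the bit positions and widths are made-up stand-ins, not
this kernel's actual flags layout.

/*
 * Minimal userspace model of the zone-index decode from folio->flags.
 * The shift/width values below are illustrative stand-ins; the real
 * ZONES_PGSHIFT/ZONES_MASK depend on the kernel configuration.
 */
#include <stdio.h>
#include <stdint.h>

#define MAX_NR_ZONES	5	/* matches 'long unsigned int [5]' in the UBSAN report */
#define ZONES_WIDTH	3	/* assumed: bits needed to encode a zone index */
#define ZONES_PGSHIFT	60	/* assumed position of the zone bits in flags */
#define ZONES_MASK	((UINT64_C(1) << ZONES_WIDTH) - 1)

/* Rough analogue of page_zonenum()/folio_zonenum(): mask the zone out of flags. */
static unsigned int zonenum(uint64_t flags)
{
	return (flags >> ZONES_PGSHIFT) & ZONES_MASK;
}

int main(void)
{
	unsigned long nr_skipped[MAX_NR_ZONES] = { 0 };
	uint64_t good_flags = UINT64_C(1) << ZONES_PGSHIFT;	/* zone 1: in range */
	uint64_t corrupt_flags = UINT64_C(7) << ZONES_PGSHIFT;	/* scribbled-on flags */
	unsigned int zone = zonenum(corrupt_flags);

	printf("good flags    -> zone %u\n", zonenum(good_flags));
	printf("corrupt flags -> zone %u\n", zone);

	/*
	 * This is the shape of the mm/vmscan.c:1654 access. With a corrupt
	 * zone index the increment would land past the end of the 5-entry
	 * stack array, which is exactly what UBSAN is complaining about.
	 */
	if (zone < MAX_NR_ZONES)
		nr_skipped[zone] += 1;
	else
		printf("zone %u would index past nr_skipped[%d]\n", zone, MAX_NR_ZONES);

	return 0;
}

None of that changes the conclusion, though: a zone index of 7 on this config
just means the flags word was already garbage by the time kswapd looked at the
folio.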