Re: vmalloc_node_range for size 4198400 failed: Address range restricted to 0xf1000000 - 0xf5110000 (kernel 6.14-rc4, ppc32)

Erhard Furtner <erhard_f@xxxxxxxxxxx> writes:

> Greetings!
>
> At boot with a KASAN-enabled v6.14-rc4 kernel on my PowerMac G4 DP I get:
>
> [...]
> vmalloc_node_range for size 4198400 failed: Address range restricted to 0xf1000000 - 0xf5110000
> swapon: vmalloc error: size 4194304, vm_struct allocation failed, mode:0xdc0(GFP_KERNEL|__GFP_ZERO), nodemask=(null),cpuset=openrc.swap,mems_allowed=0

Did we exhaust the vmalloc area completely?
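
One quick way to check that (assuming the usual /proc/vmallocinfo format,
where the second field is the size of each allocation in bytes) would be
something like:

  # sum of all live vmalloc allocations, in MiB
  awk '{ sum += $2 } END { printf "%.1f MiB\n", sum / 1048576 }' /proc/vmallocinfo

and compare the result against the ~65 MiB window from the error message
(0xf5110000 - 0xf1000000 = 0x4110000 bytes).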


> CPU: 0 UID: 0 PID: 870 Comm: swapon Tainted: G        W          6.14.0-rc4-PMacG4 #6
> Tainted: [W]=WARN
> Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
> Call Trace:
> [f2ffb9d0] [c14cfd88] dump_stack_lvl+0x70/0x8c (unreliable)
> [f2ffb9f0] [c04fb9b8] warn_alloc+0x154/0x2b8
> [f2ffbab0] [c04de94c] __vmalloc_node_range_noprof+0x154/0x958
> [f2ffbb80] [c04df23c] __vmalloc_node_noprof+0xec/0xf4
> [f2ffbbc0] [c0558524] swap_cgroup_swapon+0x70/0x198
> [f2ffbbf0] [c051e8d8] sys_swapon+0x1838/0x3624
> [f2ffbce0] [c001e574] system_call_exception+0x2dc/0x420

Since only the swapon failed, I assume you still have a working console,
right? So this is mostly a vmalloc allocation failure report?


> [f2ffbf30] [c00291ac] ret_from_syscall+0x0/0x2c
> --- interrupt: c00 at 0x2612ec
> NIP:  002612ec LR: 00534108 CTR: 001e8310
> REGS: f2ffbf40 TRAP: 0c00   Tainted: G        W           (6.14.0-rc4-PMacG4)
> MSR:  0000d032 <EE,PR,ME,IR,DR,RI>  CR: 24002444  XER: 00000000
>
> GPR00: 00000057 afe3ef20 a7a95540 01b2bdd0 00000000 24002444 fe5ff7e1 00247c24 
> GPR08: 0000d032 0000fa89 01b2d568 001e8310 24002448 0054fe14 02921154 00000000 
> GPR16: 00000000 00534b50 afe3f0ac afe3f0b0 00000000 00000000 0055001c afe3f0d0 
> GPR24: afe3f0b0 00000003 00000000 00001000 01b2bdd0 00000002 005579ec 01b2d570 
> NIP [002612ec] 0x2612ec
> LR [00534108] 0x534108
> --- interrupt: c00
> Mem-Info:
> active_anon:1989 inactive_anon:0 isolated_anon:0
>  active_file:6407 inactive_file:5879 isolated_file:0
>  unevictable:0 dirty:0 writeback:0
>  slab_reclaimable:1538 slab_unreclaimable:22927
>  mapped:2753 shmem:107 pagetables:182
>  sec_pagetables:0 bounce:0
>  kernel_misc_reclaimable:0
>  free:433110 free_pcp:472 free_cma:0
> Node 0 active_anon:7972kB inactive_anon:0kB active_file:25652kB inactive_file:23496kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:10908kB dirty:0kB writeback:0kB shmem:464kB writeback_tmp:0kB kernel_stack:1568kB pagetables:724kB sec_pagetables:0kB all_unreclaimable? no
> DMA free:591772kB boost:0kB min:3380kB low:4224kB high:5068kB reserved_highatomic:0KB active_anon:0kB inactive_anon:0kB active_file:4kB inactive_file:11056kB unevictable:0kB writepending:0kB present:786432kB managed:716492kB mlocked:0kB bounce:0kB free_pcp:1680kB local_pcp:1180kB free_cma:0kB
> lowmem_reserve[]: 0 0 1184 0
> DMA: 127*4kB (UE) 66*8kB (UME) 37*16kB (UE) 78*32kB (UME) 10*64kB (UE) 4*128kB (UME) 3*256kB (UM) 6*512kB (UM) 5*1024kB (ME) 4*2048kB (M) 139*4096kB (M) = 591772kB
> 12404 total pagecache pages
> 0 pages in swap cache
> Free swap  = 0kB
> Total swap = 0kB
> 524288 pages RAM
> 327680 pages HighMem/MovableOnly
> 42061 pages reserved

The above is mainly the physical memory info, but since vmalloc can also
fail (as in this report), it would be nice if show_mem() also printed how
much of the vmalloc space is used out of the vmalloc total.

Maybe linux-mm can tell whether we should add a change like the diff below
for the future?

diff --git a/mm/show_mem.c b/mm/show_mem.c
index 43afb56abbd3..b3af59fced02 100644
--- a/mm/show_mem.c
+++ b/mm/show_mem.c
@@ -14,6 +14,7 @@
 #include <linux/mmzone.h>
 #include <linux/swap.h>
 #include <linux/vmstat.h>
+#include <linux/vmalloc.h>

 #include "internal.h"
 #include "swap.h"
@@ -416,6 +417,8 @@ void __show_mem(unsigned int filter, nodemask_t *nodemask, int max_zone_idx)
        printk("%lu pages RAM\n", total);
        printk("%lu pages HighMem/MovableOnly\n", highmem);
        printk("%lu pages reserved\n", reserved);
+       printk("%lu pages Vmalloc Total\n", (unsigned long)VMALLOC_TOTAL >> PAGE_SHIFT);
+       printk("%lu pages Vmalloc Used\n", vmalloc_nr_pages());
 #ifdef CONFIG_CMA
        printk("%lu pages cma reserved\n", totalcma_pages);
 #endif


Meanwhile, the data below can give more details about the vmalloc area:

1. cat /proc/vmallocinfo   
2. cat /proc/meminfo       
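
The vmalloc-related fields in /proc/meminfo can also be pulled out directly,
e.g.:

  grep -i vmalloc /proc/meminfo

which should show the VmallocTotal / VmallocUsed / VmallocChunk lines.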


-ritesh

> Memory allocations:
>     85.3 MiB     6104 mm/slub.c:2423 func:alloc_slab_page
>     38.5 MiB     9862 mm/readahead.c:187 func:ractl_alloc_folio
>     9.47 MiB     2425 mm/filemap.c:1970 func:__filemap_get_folio
>     7.96 MiB     2037 mm/kasan/shadow.c:304 func:kasan_populate_vmalloc_pte
>     7.87 MiB     2125 mm/execmem.c:44 func:execmem_vmalloc
>     5.01 MiB     1283 mm/memory.c:1063 func:folio_prealloc
>     4.00 MiB        1 fs/btrfs/zstd.c:366 [btrfs] func:zstd_alloc_workspace
>     3.86 MiB      247 lib/stackdepot.c:627 func:stack_depot_save_flags
>     3.62 MiB      412 mm/slub.c:2425 func:alloc_slab_page
>     3.09 MiB    18430 fs/kernfs/dir.c:624 func:__kernfs_new_node
> couldn't allocate enough memory for swap_cgroup
> swap_cgroup can be disabled by swapaccount=0 boot option
> [...]
>

> Does only happen with CONFIG_KASAN_INLINE=y but not with CONFIG_KASAN_OUTLINE=y.
>
> Kernel .config attached.
>
> Regards,
> Erhard



