On Thu 15-02-24 06:58:42, Suren Baghdasaryan wrote:
> On Thu, Feb 15, 2024 at 1:22 AM Michal Hocko <mhocko@xxxxxxxx> wrote:
> >
> > On Mon 12-02-24 13:39:17, Suren Baghdasaryan wrote:
> > [...]
> > > @@ -423,4 +424,18 @@ void __show_mem(unsigned int filter, nodemask_t *nodemask, int max_zone_idx)
> > >  #ifdef CONFIG_MEMORY_FAILURE
> > >  	printk("%lu pages hwpoisoned\n", atomic_long_read(&num_poisoned_pages));
> > >  #endif
> > > +#ifdef CONFIG_MEM_ALLOC_PROFILING
> > > +	{
> > > +		struct seq_buf s;
> > > +		char *buf = kmalloc(4096, GFP_ATOMIC);
> > > +
> > > +		if (buf) {
> > > +			printk("Memory allocations:\n");
> > > +			seq_buf_init(&s, buf, 4096);
> > > +			alloc_tags_show_mem_report(&s);
> > > +			printk("%s", buf);
> > > +			kfree(buf);
> > > +		}
> > > +	}
> > > +#endif
> >
> > I am pretty sure I have already objected to this. Memory allocations in
> > the oom path are simply no go unless there is absolutely no other way
> > around that. In this case the buffer could be preallocated.
>
> Good point. We will change this to a smaller buffer allocated on the
> stack and will print records one-by-one. Thanks!

__show_mem could be called from very deep call chains, so putting the
buffer on the stack is not a good idea either. A single pre-allocated
buffer should do just fine.

--
Michal Hocko
SUSE Labs
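
[ For illustration, a minimal sketch of the pre-allocated buffer
  approach suggested above, not the actual patch. seq_buf,
  alloc_tags_show_mem_report() and the 4096 size come from the patch
  under discussion; the show_mem_alloc_profiling() helper name and the
  serializing spinlock are hypothetical. ]

#ifdef CONFIG_MEM_ALLOC_PROFILING
/*
 * Buffer is allocated at build time so that __show_mem() never has to
 * allocate memory, even when it is reached from the OOM path, and does
 * not add 4K of stack usage to an already deep call chain.
 */
static char mem_profiling_buf[4096];
static DEFINE_SPINLOCK(mem_profiling_buf_lock);

static void show_mem_alloc_profiling(void)
{
	struct seq_buf s;
	unsigned long flags;

	/* Serialize concurrent __show_mem() callers sharing the buffer. */
	spin_lock_irqsave(&mem_profiling_buf_lock, flags);
	seq_buf_init(&s, mem_profiling_buf, sizeof(mem_profiling_buf));
	alloc_tags_show_mem_report(&s);
	printk("Memory allocations:\n%s", mem_profiling_buf);
	spin_unlock_irqrestore(&mem_profiling_buf_lock, flags);
}
#endif

[ Using a single static buffer trades a small amount of .bss for not
  touching the allocator or the stack in the reporting path; some form
  of serialization is then needed because __show_mem() can be invoked
  concurrently. ]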