On (09/04/19 08:15), Michal Hocko wrote:
> > If you look at the original report, the failed allocation dump_stack() is,
> >
> >  <IRQ>
> >  warn_alloc.cold.43+0x8a/0x148
> >  __alloc_pages_nodemask+0x1a5c/0x1bb0
> >  alloc_pages_current+0x9c/0x110
> >  allocate_slab+0x34a/0x11f0
> >  new_slab+0x46/0x70
> >  ___slab_alloc+0x604/0x950
> >  __slab_alloc+0x12/0x20
> >  kmem_cache_alloc+0x32a/0x400
> >  __build_skb+0x23/0x60
> >  build_skb+0x1a/0xb0
> >  igb_clean_rx_irq+0xafc/0x1010 [igb]
> >  igb_poll+0x4bb/0xe30 [igb]
> >  net_rx_action+0x244/0x7a0
> >  __do_softirq+0x1a0/0x60a
> >  irq_exit+0xb5/0xd0
> >  do_IRQ+0x81/0x170
> >  common_interrupt+0xf/0xf
> >  </IRQ>
> >
> > Since it has no __GFP_NOWARN to begin with, it will call,

I think that DEFAULT_RATELIMIT_INTERVAL and DEFAULT_RATELIMIT_BURST are
fine when we ratelimit just a single printk() call: the ratelimit is
"max 10 kernel log lines in 5 seconds". But things are different in the
case of dump_stack() + show_mem() + some other output, because now we
ratelimit not a single printk() line but hundreds of them. The effective
ratelimit becomes 10 * $$$ lines in 5 seconds (IOW, we are now talking
about thousands of lines) - significantly more permissive ratelimiting.

	-ss
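
To make the point concrete, here is a rough sketch of the pattern being discussed, using the kernel's DEFINE_RATELIMIT_STATE()/__ratelimit() API. The report_alloc_failure() helper is hypothetical (it is not the actual mm/page_alloc.c warn_alloc() code), and the show_mem() signature shown is the one from kernels of roughly that era:

	/*
	 * Sketch only: one __ratelimit() check gates an entire multi-line
	 * report.  With DEFAULT_RATELIMIT_INTERVAL (5 * HZ) and
	 * DEFAULT_RATELIMIT_BURST (10), up to 10 *reports* pass per 5
	 * seconds -- but each report is dump_stack() + show_mem() + ...,
	 * so the line budget is really 10 * (lines per report), not 10.
	 */
	#include <linux/ratelimit.h>
	#include <linux/printk.h>
	#include <linux/gfp.h>
	#include <linux/mm.h>

	static DEFINE_RATELIMIT_STATE(nopage_rs, DEFAULT_RATELIMIT_INTERVAL,
				      DEFAULT_RATELIMIT_BURST);

	/* hypothetical helper, illustrating the warn_alloc()-style pattern */
	static void report_alloc_failure(gfp_t gfp_mask)
	{
		if ((gfp_mask & __GFP_NOWARN) || !__ratelimit(&nopage_rs))
			return;

		pr_warn("allocation failure: gfp=%#x\n", gfp_mask);
		dump_stack();		/* dozens of lines           */
		show_mem(0, NULL);	/* hundreds of lines; exact
					 * signature varies by kernel
					 * version                   */
	}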