On Sun 04-06-17 21:26:06, Linus Torvalds wrote:
[...]
> Also adding some VM people, because I think it's ridiculous that the
> 0-order allocation failed in the first place. Full report attached,
> there's tons of memory that should have been trivial to free.

Node 0 DMA32 free:64228kB min:12788kB low:15984kB high:19180kB
active_anon:1222056kB inactive_anon:257972kB active_file:38748kB
inactive_file:1065188kB unevictable:0kB writepending:916280kB
present:3173852kB managed:3104264kB mlocked:0kB slab_reclaimable:39816kB
slab_unreclaimable:6812kB kernel_stack:416kB pagetables:5472kB bounce:0kB
free_pcp:860kB local_pcp:12kB free_cma:0kB
lowmem_reserve[]: 0 0 12894 12894
Node 0 Normal free:54304kB min:54728kB low:68408kB high:82088kB
active_anon:3323732kB inactive_anon:819528kB active_file:4876676kB
inactive_file:3096768kB unevictable:448kB writepending:812kB
present:13484032kB managed:13206364kB mlocked:448kB
slab_reclaimable:702912kB slab_unreclaimable:89800kB kernel_stack:13248kB
pagetables:70472kB bounce:0kB free_pcp:4808kB local_pcp:628kB free_cma:0kB
lowmem_reserve[]: 0 0 0 0

Yeah, there is a lot of reclaimable memory, and I suspect that direct
reclaim has in fact freed some of it (well, SWAP_CLUSTER_MAX pages, which
is all we asked for).

> So I suspect GFP_NORETRY ends up being *much* too aggressive, and
> basically doesn't even try any trivial freeing.

Well, __GFP_NORETRY has always failed after a _single_ unsuccessful
reclaim attempt (__alloc_pages_direct_reclaim); nothing has changed
recently in that regard. So under heavy, sustained memory pressure, where
other allocators consume the reclaimed pages before the allocating
context gets a chance to use them, __GFP_NORETRY allocations fail.

> Maybe we want some middle ground between "retry forever" and "don't
> try at all". In people trying to fight the "retry forever", we seem to
> have gone too far in the "don't even bother, just return NULL"
> direction.

Yes, I have been trying to convert __GFP_REPEAT into something like that
for quite some time. See
http://lkml.kernel.org/r/20170307154843.32516-1-mhocko@xxxxxxxxxx

--
Michal Hocko
SUSE Labs
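
For reference, a minimal standalone sketch of the bail-out behaviour
described above, under the assumption of made-up flag values and helper
names: this models the logic only and is not the code in mm/page_alloc.c.
With __GFP_NORETRY the allocation gives up as soon as a single
direct-reclaim attempt fails to produce a page.

/*
 * Hypothetical, simplified model of the slowpath behaviour discussed
 * above.  Flag value and helper names are invented for illustration.
 */
#include <stdbool.h>
#include <stdio.h>

#define GFP_NORETRY 0x1u        /* stand-in for the kernel's __GFP_NORETRY */

/*
 * Stand-in for __alloc_pages_direct_reclaim(): reclaim a batch of pages
 * (SWAP_CLUSTER_MAX in the kernel) and retry the freelists.  Under heavy,
 * sustained pressure other allocators may consume the freed pages first,
 * so the retry can still come back empty-handed.
 */
static bool direct_reclaim_then_alloc(void)
{
        return false;           /* model the "reclaimed pages stolen" case */
}

static void *alloc_slowpath(unsigned int gfp_mask)
{
        if (direct_reclaim_then_alloc())
                return (void *)0x1;     /* pretend we got a page */

        /* Do not loop if the caller asked us not to retry. */
        if (gfp_mask & GFP_NORETRY)
                return NULL;            /* fail after one reclaim attempt */

        /* ... a real slowpath would keep retrying reclaim/compaction ... */
        return NULL;
}

int main(void)
{
        printf("GFP_NORETRY allocation: %p\n", alloc_slowpath(GFP_NORETRY));
        return 0;
}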