On Wed, Mar 18, 2015 at 03:09:26PM +0100, Michal Hocko wrote:
> page_cache_read has been historically using page_cache_alloc_cold to
> allocate a new page. This means that mapping_gfp_mask is used as the
> base for the gfp_mask. Many filesystems are setting this mask to
> GFP_NOFS to prevent from fs recursion issues. page_cache_read is,
> however, not called from the fs layer so it doesn't need this
> protection. Even ceph and ocfs2 which call filemap_fault from their
> fault handlers seem to be OK because they are not taking any fs lock
> before invoking generic implementation.
>
> The protection might be even harmful. There is a strong push to fail
> GFP_NOFS allocations rather than loop within allocator indefinitely with
> a very limited reclaim ability. Once we start failing those requests
> the OOM killer might be triggered prematurely because the page cache
> allocation failure is propagated up the page fault path and end up in
> pagefault_out_of_memory.
>
> Use GFP_KERNEL mask instead because it is safe from the reclaim
> recursion POV. We are already doing GFP_KERNEL allocations down
> add_to_page_cache_lru path.
>
> Reported-by: Tetsuo Handa <penguin-kernel@xxxxxxxxxxxxxxxxxxx>
> Signed-off-by: Michal Hocko <mhocko@xxxxxxx>

I'm very far behind after LSF/MM so I do not know where this came out of,
but it loses addressing-restriction hints set by drivers, such as

drivers/gpu/drm/gma500/gem.c:
	mapping_set_gfp_mask(r->gem.filp->f_mapping, GFP_KERNEL | __GFP_DMA32);

It also loses mobility hints used for fragmentation avoidance:

fs/inode.c:
	mapping_set_gfp_mask(mapping, GFP_HIGHUSER_MOVABLE);

If the masks set via mapping_set_gfp_mask are now being ignored then this
should at least trigger a once-off warning that the flags are being
dropped, so it is obvious if a recursion does occur and causes problems.

-- 
Mel Gorman
SUSE Labs
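
For reference, a minimal sketch of the kind of once-off warning suggested
above. It is not taken from the patch under discussion: the helper name
page_cache_read_alloc() and the exact hint bits checked (__GFP_DMA32 and
__GFP_MOVABLE) are assumptions for illustration only; mapping_gfp_mask(),
__page_cache_alloc(), WARN_ONCE() and __GFP_COLD are the existing kernel
interfaces of that era (__GFP_COLD mirrors the page_cache_alloc_cold()
behaviour page_cache_read used before).

	#include <linux/pagemap.h>
	#include <linux/gfp.h>
	#include <linux/bug.h>

	/*
	 * Sketch only, not the actual patch: allocate the page with a fixed
	 * GFP_KERNEL mask as proposed, but warn once if that drops hint bits
	 * the owner explicitly requested via mapping_set_gfp_mask(), e.g.
	 * __GFP_DMA32 (addressing restriction) or __GFP_MOVABLE (mobility).
	 */
	static struct page *page_cache_read_alloc(struct address_space *mapping)
	{
		gfp_t wanted = mapping_gfp_mask(mapping);	/* owner's hints */
		gfp_t used = GFP_KERNEL;			/* mask proposed by the patch */
		gfp_t lost = wanted & ~used;

		/* Once-off warning so an ignored hint is visible. */
		WARN_ONCE(lost & (__GFP_DMA32 | __GFP_MOVABLE),
			  "page_cache_read: ignoring mapping gfp hints %#x\n",
			  (unsigned int)lost);

		return __page_cache_alloc(used | __GFP_COLD);
	}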