On Tue 07-03-17 09:05:19, Darrick J. Wong wrote:
> On Tue, Mar 07, 2017 at 04:48:42PM +0100, Michal Hocko wrote:
> > From: Michal Hocko <mhocko@xxxxxxxx>
> >
> > KM_MAYFAIL didn't have any suitable GFP_FOO counterpart until recently
> > so it relied on the default page allocator behavior for the given set
> > of flags. This means that small allocations actually never failed.
> >
> > Now that we have __GFP_RETRY_MAYFAIL flag which works independently on the
> > allocation request size we can map KM_MAYFAIL to it. The allocator will
> > try as hard as it can to fulfill the request but fails eventually if
> > the progress cannot be made.
> >
> > Cc: Darrick J. Wong <darrick.wong@xxxxxxxxxx>
> > Signed-off-by: Michal Hocko <mhocko@xxxxxxxx>
> > ---
> >  fs/xfs/kmem.h | 10 ++++++++++
> >  1 file changed, 10 insertions(+)
> >
> > diff --git a/fs/xfs/kmem.h b/fs/xfs/kmem.h
> > index ae08cfd9552a..ac80a4855c83 100644
> > --- a/fs/xfs/kmem.h
> > +++ b/fs/xfs/kmem.h
> > @@ -54,6 +54,16 @@ kmem_flags_convert(xfs_km_flags_t flags)
> >  			lflags &= ~__GFP_FS;
> >  	}
> >
> > +	/*
> > +	 * Default page/slab allocator behavior is to retry for ever
> > +	 * for small allocations. We can override this behavior by using
> > +	 * __GFP_RETRY_MAYFAIL which will tell the allocator to retry as long
> > +	 * as it is feasible but rather fail than retry for ever for all
>
> s/for ever/forever/

fixed

> > +	 * request sizes.
> > +	 */
> > +	if (flags & KM_MAYFAIL)
> > +		lflags |= __GFP_RETRY_MAYFAIL;
>
> But otherwise seems ok from a quick grep -B5 MAYFAIL through the XFS code.
>
> (Has this been tested anywhere?)

Not yet, this is more for discussion at this stage. I plan to run it
through xfstests once we agree on the proper semantics. I have to
confess I rely on the existing KM_MAYFAIL annotations being correct
here, though.

Thanks!
--
Michal Hocko
SUSE Labs
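
For reference, a minimal caller sketch of the contract the mapping relies on:
an allocation passing KM_MAYFAIL has to handle NULL, because with
__GFP_RETRY_MAYFAIL the allocator retries only as long as it can make progress
and then gives up instead of looping forever. The structure and function names
below are made up for illustration; kmem_zalloc() and KM_MAYFAIL are the
existing helpers from fs/xfs/kmem.h.

/*
 * Illustrative only: struct xfs_thing and xfs_thing_alloc() are
 * hypothetical names; kmem_zalloc() and KM_MAYFAIL come from
 * fs/xfs/kmem.h.
 */
struct xfs_thing {
	int	t_dummy;	/* placeholder payload */
};

static int
xfs_thing_alloc(
	struct xfs_thing	**thingp)
{
	struct xfs_thing	*thing;

	/*
	 * A plain KM_MAYFAIL request now converts to
	 * GFP_KERNEL | __GFP_NOWARN | __GFP_RETRY_MAYFAIL, so the
	 * allocator may return NULL once its retries stop making
	 * progress; the caller must cope with that.
	 */
	thing = kmem_zalloc(sizeof(*thing), KM_MAYFAIL);
	if (!thing)
		return -ENOMEM;

	*thingp = thing;
	return 0;
}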