On Mon, Mar 25, 2024 at 06:46:29PM +0100, Pankaj Raghav (Samsung) wrote:
> > 
> > The first part of the series (fs/xfs/kmem.[ch] removal) is straight
> > forward. We've done lots of this stuff in the past leading up to
> > this point; this is just converting the final remaining usage to the
> > native kernel interface. The only down-side to this is that we end
> > up propagating __GFP_NOFAIL everywhere into the code. This is no big
> > deal for XFS - it's just formalising the fact that all our
> > allocations are __GFP_NOFAIL by default, except for the ones we
> > explicitly mark as able to fail. This may be a surprise to people
> > outside XFS, but we've been doing this for a couple of decades now
> > and the sky hasn't fallen yet.
> 
> Definitely a surprise to me. :)
> 
> I rebased my LBS patches with these changes and generic/476 started to
> break in page alloc[1]:
> 
> static inline
> struct page *rmqueue(struct zone *preferred_zone,
> 			struct zone *zone, unsigned int order,
> 			gfp_t gfp_flags, unsigned int alloc_flags,
> 			int migratetype)
> {
> 	struct page *page;
> 
> 	/*
> 	 * We most definitely don't want callers attempting to
> 	 * allocate greater than order-1 page units with __GFP_NOFAIL.
> 	 */
> 	WARN_ON_ONCE((gfp_flags & __GFP_NOFAIL) && (order > 1));
> ...

Yeah, that warning needs to go. It's just unnecessary noise at this
point in time - at minimum it should be gated on __GFP_NOWARN.

> The reason for this is the call from xfs_attr_leaf.c to allocate memory
> with attr->geo->blksize, which is set to 1 FSB. As 1 FSB can correspond
> to order > 1 in LBS, this WARN_ON_ONCE is triggered.
> 
> This was not an issue before as xfs/kmem.c retried manually in a loop
> without passing the __GFP_NOFAIL flag.

Right, we've been doing this sort of "no fail" high order kmalloc
thing for a couple of decades in XFS, explicitly to avoid arbitrary
noise like this warning.....

> As not all calls to kmalloc in xfs_attr_leaf.c handle ENOMEM
> errors, what would be the correct approach for LBS configurations?

Use kvmalloc().

-Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
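
For illustration, a minimal sketch of the kind of kvmalloc() conversion
suggested above, assuming a call site like the one described in the
thread (a temporary buffer of attr->geo->blksize bytes in
xfs_attr_leaf.c). The surrounding code and the exact gfp flags are
assumptions for illustration, not the actual patch:

	/*
	 * Illustrative sketch only, not the real XFS change.
	 *
	 * The buffer is one filesystem block in size. With a large
	 * block size this can be an order > 1 allocation, so a plain
	 * kmalloc(..., GFP_KERNEL | __GFP_NOFAIL) would trip the page
	 * allocator warning quoted above. kvmalloc() attempts a
	 * physically contiguous allocation first and falls back to
	 * vmalloc(), so the caller can still treat the allocation as
	 * one that does not fail. Passing __GFP_NOFAIL here is an
	 * assumption about how the no-fail semantics would be spelled.
	 */
	char *tmpbuffer;

	tmpbuffer = kvmalloc(attr->geo->blksize, GFP_KERNEL | __GFP_NOFAIL);

	/* ... operate on the temporary copy of the attr leaf block ... */

	kvfree(tmpbuffer);

kvfree() is used on the release path because the memory may have come
from either the slab allocator or vmalloc().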