> The first part of the series (fs/xfs/kmem.[ch] removal) is
> straightforward. We've done lots of this stuff in the past leading
> up to this point; this is just converting the final remaining usage
> to the native kernel interface. The only down-side to this is that
> we end up propagating __GFP_NOFAIL everywhere into the code. This
> is no big deal for XFS - it's just formalising the fact that all
> our allocations are __GFP_NOFAIL by default, except for the ones we
> explicitly mark as able to fail. This may be a surprise to people
> outside XFS, but we've been doing this for a couple of decades now
> and the sky hasn't fallen yet.

Definitely a surprise to me. :)

I rebased my LBS patches on top of these changes, and generic/476
started to break in the page allocator[1]:

static inline
struct page *rmqueue(struct zone *preferred_zone,
			struct zone *zone, unsigned int order,
			gfp_t gfp_flags, unsigned int alloc_flags,
			int migratetype)
{
	struct page *page;

	/*
	 * We most definitely don't want callers attempting to
	 * allocate greater than order-1 page units with __GFP_NOFAIL.
	 */
	WARN_ON_ONCE((gfp_flags & __GFP_NOFAIL) && (order > 1));
...

The reason for this is the call from xfs_attr_leaf.c to allocate
memory with attr->geo->blksize, which is set to 1 FSB. As 1 FSB can
correspond to order > 1 with LBS, this WARN_ON_ONCE is triggered.
This was not an issue before, as xfs/kmem.c retried manually in a
loop without passing the __GFP_NOFAIL flag.

As not all kmalloc callers in xfs_attr_leaf.c handle ENOMEM errors,
what would be the correct approach for LBS configurations? One
possible idea is to use __GFP_RETRY_MAYFAIL for LBS configurations,
as that resembles the way things worked before.

Let me know your thoughts.

--
Pankaj

[1] https://elixir.bootlin.com/linux/v6.9-rc1/source/mm/page_alloc.c#L2902