On Wed, 28 Sep 2022, Vlastimil Babka wrote:
> On 9/28/22 15:48, Joel Fernandes wrote:
> > On Wed, Sep 28, 2022 at 02:49:02PM +0900, Hyeonggon Yoo wrote:
> >> On Tue, Sep 27, 2022 at 10:16:35PM -0700, Hugh Dickins wrote:
> >>> It's a bug in linux-next, but taking me too long to identify which
> >>> commit is "to blame", so let me throw it over to you without more
> >>> delay: I think __PageMovable() now needs to check !PageSlab().
>
> When I tried that, the result wasn't really nice:
> https://lore.kernel.org/all/aec59f53-0e53-1736-5932-25407125d4d4@xxxxxxx/
>
> And what if there's another conflicting page "type" later. Or the debugging
> variant of rcu_head in struct page itself. The __PageMovable() is just too
> fragile.

I don't disagree (and don't really know all the things you're thinking of
in there).  But if it's important to rescue this feature for 6.1, a
different approach may be the very simple patch below (I met a similar
issue with OPTIMIZE_FOR_SIZE in i915 a year ago, and just remembered).

But you be the judge of it: (a) I do not know whether rcu_free_slab is the
only risky address ever stuffed into that field; and (b) I'm clueless when
it comes to those architectures (powerpc etc) where the address of a
function is something different from the address of the function (have I
conveyed my cluelessness adequately?).

Hugh

--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1953,7 +1953,12 @@ static void __free_slab(struct kmem_cach
 	__free_pages(folio_page(folio, 0), order);
 }
 
-static void rcu_free_slab(struct rcu_head *h)
+/*
+ * rcu_free_slab() must be __aligned(4) because its address is saved
+ * in the rcu_head field, which coincides with page->mapping, which
+ * causes trouble if compaction mistakes it for PAGE_MAPPING_MOVABLE.
+ */
+__aligned(4) static void rcu_free_slab(struct rcu_head *h)
 {
 	struct slab *slab = container_of(h, struct slab, rcu_head);
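
To spell out the fragility I'm relying on: __PageMovable() decides by
looking at the low two bits of page->mapping, so a callback address with
bit 1 set reads as PAGE_MAPPING_MOVABLE once the rcu_head overlays that
field.  A standalone userspace sketch of that check follows; the constants
and the hypothetical addresses mirror my reading of page-flags.h and are
only illustrative, not the real kernel code:

/*
 * Standalone sketch (not kernel code): why an unaligned function address
 * stored in the rcu_head / page->mapping union can fool compaction.
 * PAGE_MAPPING_MOVABLE/PAGE_MAPPING_FLAGS here mirror my reading of
 * include/linux/page-flags.h; the addresses are made up.
 */
#include <stdio.h>

#define PAGE_MAPPING_MOVABLE	0x2UL
#define PAGE_MAPPING_FLAGS	0x3UL

/* Stand-in for __PageMovable(): true when the low bits read as "movable". */
static int looks_movable(unsigned long mapping)
{
	return (mapping & PAGE_MAPPING_FLAGS) == PAGE_MAPPING_MOVABLE;
}

int main(void)
{
	/* Hypothetical callback addresses overlaid on page->mapping. */
	unsigned long unaligned_fn = 0xffffffff81234562UL;	/* bit 1 set    */
	unsigned long aligned_fn   = 0xffffffff81234560UL;	/* __aligned(4) */

	printf("unaligned callback looks movable: %d\n",
	       looks_movable(unaligned_fn));	/* 1: compaction is fooled */
	printf("aligned callback looks movable: %d\n",
	       looks_movable(aligned_fn));	/* 0: low bits are clear   */
	return 0;
}

Which is why __aligned(4) is enough for the patch above: it forces bits 0
and 1 of rcu_free_slab's address to zero, so the overlaid value can never
match PAGE_MAPPING_MOVABLE (nor PAGE_MAPPING_ANON, for that matter).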