On 09/13, Mina Almasry wrote:
> Building net-next with powerpc with GCC 14 compiler results in this
> build error:
>
> /home/sfr/next/tmp/ccuSzwiR.s: Assembler messages:
> /home/sfr/next/tmp/ccuSzwiR.s:2579: Error: operand out of domain (39 is
> not a multiple of 4)
> make[5]: *** [/home/sfr/next/next/scripts/Makefile.build:229:
> net/core/page_pool.o] Error 1
>
> Root caused in this thread:
> https://lore.kernel.org/netdev/913e2fbd-d318-4c9b-aed2-4d333a1d5cf0@xxxxxxxxxxxxxxxxxx/
>
> We try to access offset 40 in the pointer returned by this function:
>
> static inline unsigned long _compound_head(const struct page *page)
> {
> 	unsigned long head = READ_ONCE(page->compound_head);
>
> 	if (unlikely(head & 1))
> 		return head - 1;
> 	return (unsigned long)page_fixed_fake_head(page);
> }
>
> The GCC 14 (but not 11) compiler optimizes this by doing:
>
> ld page + 39
>
> Rather than:
>
> ld (page - 1) + 40
>
> And causing an unaligned load. Get around this by issuing a READ_ONCE as
> we convert the page to netmem. That disables the compiler optimizing the
> load in this way.
>
> Cc: Simon Horman <horms@xxxxxxxxxx>
> Cc: Stephen Rothwell <sfr@xxxxxxxxxxxxxxxx>
> Cc: Jakub Kicinski <kuba@xxxxxxxxxx>
> Cc: David Miller <davem@xxxxxxxxxxxxx>
> Cc: Paolo Abeni <pabeni@xxxxxxxxxx>
> Cc: Networking <netdev@xxxxxxxxxxxxxxx>
> Cc: Linux Kernel Mailing List <linux-kernel@xxxxxxxxxxxxxxx>
> Cc: Linux Next Mailing List <linux-next@xxxxxxxxxxxxxxx>
> Cc: Arnd Bergmann <arnd@xxxxxxxx>
> Cc: "linuxppc-dev@xxxxxxxxxxxxxxxx" <linuxppc-dev@xxxxxxxxxxxxxxxx>
> Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
>
> Suggested-by: Jakub Kicinski <kuba@xxxxxxxxxx>
> Signed-off-by: Mina Almasry <almasrymina@xxxxxxxxxx>
>
> ---
>
> v2: https://lore.kernel.org/netdev/20240913192036.3289003-1-almasrymina@xxxxxxxxxx/
>
> - Work around this issue as we convert the page to netmem, instead of
>   a generic change that affects compound_head().
> ---
>  net/core/page_pool.c | 15 ++++++++++++++-
>  1 file changed, 14 insertions(+), 1 deletion(-)
>
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index a813d30d2135..74ea491d0ab2 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -859,12 +859,25 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data,
>  {
>  	int i, bulk_len = 0;
>  	bool allow_direct;
> +	netmem_ref netmem;
> +	struct page *page;
>  	bool in_softirq;
>
>  	allow_direct = page_pool_napi_local(pool);
>
>  	for (i = 0; i < count; i++) {
> -		netmem_ref netmem = page_to_netmem(virt_to_head_page(data[i]));
> +		page = virt_to_head_page(data[i]);
> +
> +		/* GCC 14 powerpc compiler will optimize reads into the
> +		 * resulting netmem_ref into unaligned reads as it sees address
> +		 * arithmetic in _compound_head() call that the page has come
> +		 * from.
> +		 *
> +		 * The READ_ONCE here gets around that by breaking the
> +		 * optimization chain between the address arithmetic and later
> +		 * indexing.
> +		 */
> +		netmem = page_to_netmem(READ_ONCE(page));
>
>  		/* It is not the last user for the page frag case */
>  		if (!page_pool_is_last_ref(netmem))

Are we sure this is the only place where we can be hit by this? Any reason
not to hide this inside page_to_netmem()?
diff --git a/include/net/netmem.h b/include/net/netmem.h
index 8a6e20be4b9d..46bc362acec4 100644
--- a/include/net/netmem.h
+++ b/include/net/netmem.h
@@ -100,7 +100,7 @@ static inline netmem_ref net_iov_to_netmem(struct net_iov *niov)
 
 static inline netmem_ref page_to_netmem(struct page *page)
 {
-	return (__force netmem_ref)page;
+	return (__force netmem_ref)READ_ONCE(page);
 }
 
 static inline int netmem_ref_count(netmem_ref netmem)

Is it gonna generate slower code elsewhere?
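
FWIW, here is a rough standalone sketch of the mechanism (NOT kernel code;
the struct layout and all names below are invented, with pp_magic placed at
offset 40 on a 64-bit target only to mirror the report). The point is just
that READ_ONCE() is essentially a volatile access, so after it the compiler
has to treat the reloaded pointer as an opaque value instead of tracking the
"head - 1" arithmetic into the displacement of the following load:

/*
 * Standalone sketch (not kernel code) of the folding being discussed.
 */
struct fake_page {
	unsigned long compound_head;	/* tail pages: head address | 1 */
	unsigned long pad[4];
	unsigned long pp_magic;		/* offset 40 on 64-bit, as in the report */
};

/* Simplified stand-in for _compound_head(): strip the tail-page tag bit. */
static inline struct fake_page *head_of(struct fake_page *page)
{
	unsigned long head = page->compound_head;

	if (head & 1)
		return (struct fake_page *)(head - 1);
	return page;
}

/*
 * In the tagged branch the compiler can see that hp == head - 1 and may
 * fold the -1 into the load, i.e. emit "ld rX, 39(rY)" -- which the
 * powerpc assembler rejects because DS-form ld needs a displacement that
 * is a multiple of 4.
 */
unsigned long read_folded(struct fake_page *page)
{
	struct fake_page *hp = head_of(page);

	return hp->pp_magic;
}

/*
 * Minimal READ_ONCE()-alike: the kernel version (rwonce.h) is essentially
 * this volatile access plus type checking.
 */
#define read_once(x)	(*(const volatile __typeof__(x) *)&(x))

/*
 * The volatile read forces hp to be re-read as an opaque value, so the
 * following access stays a plain, aligned "ld rX, 40(rY)".
 */
unsigned long read_barriered(struct fake_page *page)
{
	struct fake_page *hp = head_of(page);

	hp = read_once(hp);
	return hp->pp_magic;
}

Compiling both functions with a powerpc64 GCC 14 at -O2 should show the
difference (whether it actually reproduces will of course depend on the
exact compiler). The flip side, which is presumably the "slower code"
concern: a volatile access inside page_to_netmem() would force every caller
to reload the page pointer and would keep the compiler from caching or
folding it, not just on the one path that currently trips the powerpc
assembler.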