Excerpts from Andy Lutomirski's message of December 28, 2020 4:28 am:
> The old sync_core_before_usermode() comments said that a non-icache-syncing
> return-to-usermode instruction is x86-specific and that all other
> architectures automatically notice cross-modified code on return to
> userspace. Based on my general understanding of how CPUs work and based on
> my attempt to read the ARM manual, this is not true at all. In fact, x86
> seems to be a bit of an anomaly in the other direction: x86's IRET is
> unusually heavyweight for a return-to-usermode instruction.

"sync_core_before_usermode", as I've said, says nothing to the arch code, to
the scheduler, or to membarrier. It's badly named to start with, so if it is
being renamed it should become something else. exit_lazy_tlb() at least says
something quite precise to the scheduler and to the arch code that implements
membarrier.

That said, I don't mind the idea of just making it x86-specific if, as you
say, the arch code can detect lazy mm switches more precisely than generic
code can, and you want to do that.

> So let's drop any pretense that we can have a generic implementation
> behind membarrier's SYNC_CORE flush and require all architectures that opt
> in to supply their own. This means x86, arm64, and powerpc for now. Let's
> also rename the function from sync_core_before_usermode() to
> membarrier_sync_core_before_usermode() because the precise flushing details
> may very well be specific to membarrier, and even the concept of
> "sync_core" in the kernel is mostly an x86-ism.

The concept of "sync_core" (x86: serializing instruction; powerpc: context
synchronizing instruction; etc.) is not an x86-ism at all. x86 just wanted to
add a serializing instruction to generic code, so it grew this nasty API, but
the concept applies broadly.

> I admit that I'm rather surprised that the code worked at all on arm64,
> and I'm suspicious that it has never been very well tested. My apologies
> for not reviewing this more carefully in the first place.
>
> Cc: Michael Ellerman <mpe@xxxxxxxxxxxxxx>
> Cc: Benjamin Herrenschmidt <benh@xxxxxxxxxxxxxxxxxxx>
> Cc: Paul Mackerras <paulus@xxxxxxxxx>
> Cc: linuxppc-dev@xxxxxxxxxxxxxxxx
> Cc: Nicholas Piggin <npiggin@xxxxxxxxx>
> Cc: Catalin Marinas <catalin.marinas@xxxxxxx>
> Cc: Will Deacon <will@xxxxxxxxxx>
> Cc: linux-arm-kernel@xxxxxxxxxxxxxxxxxxx
> Cc: Mathieu Desnoyers <mathieu.desnoyers@xxxxxxxxxxxx>
> Cc: x86@xxxxxxxxxx
> Cc: stable@xxxxxxxxxxxxxxx
> Fixes: 70216e18e519 ("membarrier: Provide core serializing command, *_SYNC_CORE")
> Signed-off-by: Andy Lutomirski <luto@xxxxxxxxxx>
> ---
>
> Hi arm64 and powerpc people-
>
> This is part of a series here:
>
> https://git.kernel.org/pub/scm/linux/kernel/git/luto/linux.git/log/?h=x86/fixes
>
> Before I send out the whole series, I'm hoping that some arm64 and powerpc
> people can help me verify that I did this patch right. Once I get
> some feedback on this patch, I'll send out the whole pile. And once
> *that's* done, I'll start giving the mm lazy stuff some serious thought.
>
> The x86 part is already fixed in Linus' tree.
>
> Thanks,
> Andy
>
>  arch/arm64/include/asm/sync_core.h   | 21 +++++++++++++++++++++
>  arch/powerpc/include/asm/sync_core.h | 20 ++++++++++++++++++++
>  arch/x86/Kconfig                     |  1 -
>  arch/x86/include/asm/sync_core.h     |  7 +++----
>  include/linux/sched/mm.h             |  1 -
>  include/linux/sync_core.h            | 21 ---------------------
>  init/Kconfig                         |  3 ---
>  kernel/sched/membarrier.c            | 15 +++++++++++----
>  8 files changed, 55 insertions(+), 34 deletions(-)
>  create mode 100644 arch/arm64/include/asm/sync_core.h
>  create mode 100644 arch/powerpc/include/asm/sync_core.h
>  delete mode 100644 include/linux/sync_core.h
>
> diff --git a/arch/arm64/include/asm/sync_core.h b/arch/arm64/include/asm/sync_core.h
> new file mode 100644
> index 000000000000..5be4531caabd
> --- /dev/null
> +++ b/arch/arm64/include/asm/sync_core.h
> @@ -0,0 +1,21 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef _ASM_ARM64_SYNC_CORE_H
> +#define _ASM_ARM64_SYNC_CORE_H
> +
> +#include <asm/barrier.h>
> +
> +/*
> + * Ensure that the CPU notices any instruction changes before the next time
> + * it returns to usermode.
> + */
> +static inline void membarrier_sync_core_before_usermode(void)
> +{
> +	/*
> +	 * XXX: is this enough or do we need a DMB first to make sure that
> +	 * writes from other CPUs become visible to this CPU?  We have an
> +	 * smp_mb() already, but that's not quite the same thing.
> +	 */
> +	isb();
> +}
> +
> +#endif /* _ASM_ARM64_SYNC_CORE_H */
> diff --git a/arch/powerpc/include/asm/sync_core.h b/arch/powerpc/include/asm/sync_core.h
> new file mode 100644
> index 000000000000..71dfbe7794e5
> --- /dev/null
> +++ b/arch/powerpc/include/asm/sync_core.h
> @@ -0,0 +1,20 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef _ASM_POWERPC_SYNC_CORE_H
> +#define _ASM_POWERPC_SYNC_CORE_H
> +
> +#include <asm/barrier.h>
> +
> +/*
> + * Ensure that the CPU notices any instruction changes before the next time
> + * it returns to usermode.
> + */
> +static inline void membarrier_sync_core_before_usermode(void)
> +{
> +	/*
> +	 * XXX: I know basically nothing about powerpc cache management.
> +	 * Is this correct?
> +	 */
> +	isync();

This is not about memory ordering or cache management, it's about pipeline
management. Powerpc's return to user mode serializes the CPU (aka the
hardware thread, _not_ the core; another misnomer in the name, but AFAICS
the hardware thread is what membarrier requires). So this is wrong: powerpc
needs nothing here.

Thanks,
Nick