On 08/01/2022 at 17:43, Andy Lutomirski wrote:
> powerpc did the following on some, but not all, paths through
> switch_mm_irqs_off():
>
>          /*
>           * Only need the full barrier when switching between processes.
>           * Barrier when switching from kernel to userspace is not
>           * required here, given that it is implied by mmdrop(). Barrier
>           * when switching from userspace to kernel is not needed after
>           * store to rq->curr.
>           */
>          if (likely(!(atomic_read(&next->membarrier_state) &
>                       (MEMBARRIER_STATE_PRIVATE_EXPEDITED |
>                        MEMBARRIER_STATE_GLOBAL_EXPEDITED)) || !prev))
>                  return;
>
> This is puzzling: if !prev, then one might expect that we are switching
> from kernel to user, not user to kernel, which is inconsistent with the
> comment.  But this is all nonsense, because the one and only caller would
> never have prev == NULL and would, in fact, OOPS if prev == NULL.
>
> In any event, this code is unnecessary, since the new generic
> membarrier_finish_switch_mm() provides the same barrier without arch help.

I can't find this membarrier_finish_switch_mm() function, neither in
Linus' tree nor in the linux-next tree.

>
> arch/powerpc/include/asm/membarrier.h remains as an empty header,
> because a later patch in this series will add code to it.
>
> Cc: Michael Ellerman <mpe@xxxxxxxxxxxxxx>
> Cc: Benjamin Herrenschmidt <benh@xxxxxxxxxxxxxxxxxxx>
> Cc: Paul Mackerras <paulus@xxxxxxxxx>
> Cc: linuxppc-dev@xxxxxxxxxxxxxxxx
> Cc: Nicholas Piggin <npiggin@xxxxxxxxx>
> Cc: Mathieu Desnoyers <mathieu.desnoyers@xxxxxxxxxxxx>
> Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
> Signed-off-by: Andy Lutomirski <luto@xxxxxxxxxx>
> ---
>   arch/powerpc/include/asm/membarrier.h | 24 ------------------------
>   arch/powerpc/mm/mmu_context.c         |  1 -
>   2 files changed, 25 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/membarrier.h b/arch/powerpc/include/asm/membarrier.h
> index de7f79157918..b90766e95bd1 100644
> --- a/arch/powerpc/include/asm/membarrier.h
> +++ b/arch/powerpc/include/asm/membarrier.h
> @@ -1,28 +1,4 @@
>   #ifndef _ASM_POWERPC_MEMBARRIER_H
>   #define _ASM_POWERPC_MEMBARRIER_H
>
> -static inline void membarrier_arch_switch_mm(struct mm_struct *prev,
> -					     struct mm_struct *next,
> -					     struct task_struct *tsk)
> -{
> -	/*
> -	 * Only need the full barrier when switching between processes.
> -	 * Barrier when switching from kernel to userspace is not
> -	 * required here, given that it is implied by mmdrop(). Barrier
> -	 * when switching from userspace to kernel is not needed after
> -	 * store to rq->curr.
> -	 */
> -	if (IS_ENABLED(CONFIG_SMP) &&
> -	    likely(!(atomic_read(&next->membarrier_state) &
> -		     (MEMBARRIER_STATE_PRIVATE_EXPEDITED |
> -		      MEMBARRIER_STATE_GLOBAL_EXPEDITED)) || !prev))
> -		return;
> -
> -	/*
> -	 * The membarrier system call requires a full memory barrier
> -	 * after storing to rq->curr, before going back to user-space.
> -	 */
> -	smp_mb();
> -}
> -
>   #endif /* _ASM_POWERPC_MEMBARRIER_H */
> diff --git a/arch/powerpc/mm/mmu_context.c b/arch/powerpc/mm/mmu_context.c
> index 74246536b832..5f2daa6b0497 100644
> --- a/arch/powerpc/mm/mmu_context.c
> +++ b/arch/powerpc/mm/mmu_context.c
> @@ -84,7 +84,6 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
>   		asm volatile ("dssall");
>
>   	if (!new_on_cpu)
> -		membarrier_arch_switch_mm(prev, next, tsk);

Are you sure that's what you want?

It now means you have:

	if (!new_on_cpu)
		switch_mmu_context(prev, next, tsk);

>
>   	/*
>   	 * The actual HW switching method differs between the various
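
If the whole thing is really meant to go away, I would expect the hunk to
also drop the if (!new_on_cpu) line, i.e. something like the below (just a
sketch built from the context quoted above, not tested):

 		asm volatile ("dssall");
 
-	if (!new_on_cpu)
-		membarrier_arch_switch_mm(prev, next, tsk);
-
 	/*
 	 * The actual HW switching method differs between the various

Otherwise switch_mmu_context() silently becomes conditional on !new_on_cpu.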