The smp_mb() here directly follows a spin_lock() call, so smp_mb__after_spinlock()
provides the required ordering. This whacks an explicit barrier on x86-64, where
the locked instruction in spin_lock() already acts as a full barrier and
smp_mb__after_spinlock() expands to a no-op.

Signed-off-by: Mateusz Guzik <mjguzik@xxxxxxxxx>
---
This plausibly can go away altogether, but I could not be arsed to convince
myself that's correct. Anyone willing to put in the time is welcome :)

 fs/inode.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/inode.c b/fs/inode.c
index e5a60084a7a9..b3db1234737f 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -817,7 +817,7 @@ static void evict(struct inode *inode)
 	 * ___wait_var_event() either sees the bit cleared or
 	 * waitqueue_active() check in wake_up_var() sees the waiter.
 	 */
-	smp_mb();
+	smp_mb__after_spinlock();
 	inode_wake_up_bit(inode, __I_NEW);
 	BUG_ON(inode->i_state != (I_FREEING | I_CLEAR));
 	spin_unlock(&inode->i_lock);
-- 
2.43.0
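
For readers following along, below is a minimal sketch (not part of the patch; all
demo_* names are made up) of the generic waker-side pattern the comment in evict()
refers to: publish a condition, issue a full barrier, then do the lockless
waitqueue_active() check. Because the barrier sits directly after spin_lock(),
smp_mb__after_spinlock() can stand in for smp_mb():

	/*
	 * Illustrative sketch only, not taken from fs/inode.c.
	 */
	#include <linux/spinlock.h>
	#include <linux/wait.h>

	static DEFINE_SPINLOCK(demo_lock);
	static DECLARE_WAIT_QUEUE_HEAD(demo_wq);
	static bool demo_done;

	static void demo_waker(void)
	{
		/* Publish the condition the waiter sleeps on. */
		WRITE_ONCE(demo_done, true);

		spin_lock(&demo_lock);
		/*
		 * Order the store above (and the lock acquisition) against
		 * the lockless waitqueue_active() load below. Since this
		 * sits directly after spin_lock(), smp_mb__after_spinlock()
		 * suffices; on x86-64 it compiles to nothing, whereas
		 * smp_mb() always emits an explicit barrier.
		 */
		smp_mb__after_spinlock();
		if (waitqueue_active(&demo_wq))
			wake_up(&demo_wq);
		spin_unlock(&demo_lock);
	}

The design point is that architectures whose lock acquisition is already fully
ordered (x86-64) leave smp_mb__after_spinlock() as a no-op, while weaker ones
(e.g. arm64, powerpc) define it as smp_mb().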