Nicholas Piggin <npiggin@xxxxxxxxx> writes:
> Will Deacon's on March 2, 2019 12:03 am:
>> In preparation for removing all explicit mmiowb() calls from driver
>> code, implement a tracking system in asm-generic based loosely on the
>> PowerPC implementation. This allows architectures with a non-empty
>> mmiowb() definition to have the barrier automatically inserted in
>> spin_unlock() following a critical section containing an I/O write.
>
> Is there a reason to call this "mmiowb"? We already have wmb that
> orders cacheable stores vs mmio stores don't we?
>
> Yes ia64 "sn2" is broken in that case, but that can be fixed (if
> anyone really cares about the platform any more). Maybe that's
> orthogonal to what you're doing here, I just don't like seeing
> "mmiowb" spread.
>
> This series works for spin locks, but you would want a driver to
> be able to use wmb() to order locks vs mmio when using a bit lock
> or a mutex or whatever else.

Without wading into the rest of the discussion, this does raise an
interesting point, ie. what about eg. rwlocks? They're basically
equivalent to spinlocks, and so could reasonably be expected to have
the same behaviour.

But we don't check the io_sync flag in arch_read/write_unlock() etc.,
and both of those use lwsync.

Seems like we just forgot they existed? Commit f007cacffc88
("[POWERPC] Fix MMIO ops to provide expected barrier behaviour") that
added the io_sync stuff doesn't mention them at all.

Am I missing anything? AFAICS read/write locks were never built on top
of spin locks, so it seems we're just hoping drivers using rwlock do
the right barriers?

cheers