On Fri, 29 Nov 2024 at 20:25, Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> wrote:
>
> Swap the positions of lock and count to support CMPXCHG_LOCKREF if
> SPINLOCK_SIZE > 4 when enable LOCK_STAT. The reference count can always
> be used regardless of the spinlock_t size.

NAK. This is not sane:

> +	struct {
> +		int count;
> +		spinlock_t lock;
> +	} __packed;

That spinlock_t might not be usable in a 'packed' format. For all we
know, some architecture might have alignment restrictions for
spinlock_t, and who the hell knows where the actual lock is anyway.

In fact, I pretty much *guarantee* that is the case. That 'spinlock_t'
will have a 'struct lockdep_map' in it that has pointers etc, and some
architectures will not deal with unaligned accesses, and this will
simply NOT WORK.

So no, this needs to die. I would suggest

 (a) either just admit that LOCKREF doesn't work with spinlock
     statistics or debugging (which is the current situation)

 (b) lockref cases would be changed to not use 'spinlock_t' at all,
     but simply use 'struct raw_spinlock'.

but this kind of "let's play games with unaligned lockdep maps etc"
is not something that I will ever accept.

Note that (b) would be a *big* change. Because lockref users often use
the spinlock part quite independently, and now all of those would need
to be changed to use raw locks.

Just as an example, see this:

    $ git grep 'spin_.*->d_lock' | wc -l
    340

and while the dentry cache is probably the main user of lockref, it's
not the only one. So those 340 dentry uses are probably the majority,
but I bet there's another hundred or so non-dentry lockref spinlock
users in erofs and gfs2.

Andrew - please make sure that patch gets removed from all your queues.

              Linus