On Sat, Feb 08, 2020 at 11:34:40AM -0800, ira.weiny@xxxxxxxxx wrote:
> From: Ira Weiny <ira.weiny@xxxxxxxxx>
>
> DAX requires special address space operations but many other functions
> check the IS_DAX() state.
>
> While DAX is a property of the inode we prefer a lock at the super block
> level because of the overhead of a rwsem within the inode.
>
> Define a vfs per superblock percpu rw semaphore to lock the DAX state
> while performing various VFS layer operations.  Write lock calls are
> provided here but are used in subsequent patches by the file systems
> themselves.
>
> Signed-off-by: Ira Weiny <ira.weiny@xxxxxxxxx>
>
> ---
> Changes from V2
>
> 	Rebase on linux-next-08-02-2020
>
> 	Fix locking order
> 	Change all references from mode to state where appropriate
> 	add CONFIG_FS_DAX requirement for state change
> 	Use a static branch to enable locking only when a dax capable
> 	device has been seen.
>
> 	Move the lock to a global vfs lock
>
> 	this does a few things
> 		1) preps us better for ext4 support
> 		2) removes funky callbacks from inode ops
> 		3) removes complexity from XFS and probably from
> 		   ext4 later
>
> 	We can do this because
> 		1) the locking order is required to be at the
> 		   highest level anyway, so why complicate xfs
> 		2) We had to move the sem to the super_block
> 		   because it is too heavy for the inode.
> 		3) After internal discussions with Dan we
> 		   decided that this would be easier, just as
> 		   performant, and with slightly less overhead
> 		   than in the VFS SB.
>
> 	We also change the function names to up/down; read/write as
> 	appropriate.  Previous names were over simplified.

This, IMO, is a bit of a train wreck.

This patch has nothing to do with "DAX state", it's about serialising
access to the aops vector. There should be zero references to DAX in
this patch at all, except maybe to say "switching DAX on dynamically
requires atomic switching of address space ops".

Big problems I see here:

1. static key to turn it on globally.
   - just a gross hack around doing it properly with a per-sb
     mechanism and enabling it only on filesystems that are on DAX
     capable block devices.
   - you're already passing in an inode to all these functions.
     It's trivial to do:

	if (!(inode->i_sb->s_flags & S_DYNAMIC_AOPS))
		return;
	/* do sb->s_aops_lock manipulation */

2. global lock - OMG!
   - a global lock will cause entire system IO/page fault stalls
     when someone does recursive/bulk DAX flag modification
     operations.
   - Per-cpu rwsem contention on large systems will be utterly
     awful.
   - ext4's use case almost never hits the exclusive lock side of
     the percpu-rwsem - only when changing the journal mode flag on
     the inode. And it only affects writeback in progress, so it's
     not going to have massive concurrency on it like a VFS level
     global lock has. -> Bad model to follow.
   - a per-sb lock is trivial - see #1 - which limits scope to a
     single filesystem
   - a per-inode rwsem would make this problem go away entirely.

3. If we can use a global per-cpu rwsem, why can't we just use a
   per-inode rwsem?
   - locking context rules are the same
   - rwsem scales pretty damn well for shared ops
   - no "global" contention problems
   - small enough that we can put another rwsem in the inode.

4. "inode_dax_state_up_read" - Eye bleeds.
   - this is about the aops structure serialisation, not dax.
   - The name makes no sense in the places it has been added.

5. We already have code that does almost exactly what we need: the
   superblock freezing infrastructure.
   - freezing implements a "hold operations on this superblock
     until further notice" model we can easily follow.
   - sb_start_write/sb_end_write provides a simple API model and a
     clean, clear and concise naming convention we can use, too.

Really, I'm struggling to understand how we got to "global locking
that stops the world" from "need to change per-inode state
atomically".
Can someone please explain to me why this patch isn't just a simple
set of largely self-explanatory functions like this:

	XX_start_aop()
	XX_end_aop()
	XX_lock_aops()
	XX_switch_aops(aops)
	XX_unlock_aops()

where "XX" is "sb" or "inode" depending on what locking granularity
is used...

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx