> > > > Hi Jan, Dave,
> > > >
> > > > Trying to circle back to this after 3 years!
> > > > Seeing that there is no progress with range locks and
> > > > that the mixed rw workloads performance issue still very much exists.
> > > > Is the situation now different than 3 years ago with invalidate_lock?
> > >
> > > Yes, I've implemented invalidate_lock exactly to fix the issues you've
> > > pointed out without regressing the mixed rw workloads (because
> > > invalidate_lock is taken in shared mode only for reads and usually not at
> > > all for writes).
> > >
> > > > Would my approach of pre-warming the page cache before taking IOLOCK
> > > > be safe if the page cache is pre-warmed with invalidate_lock held?
> > >
> > > Why would it be needed? But yes, with invalidate_lock you could presumably
> > > make that idea safe...
> >
> > To remind you, the context in which I pointed you to the punch hole race
> > issue in "other file systems" was a discussion about trying to relax the
> > "atomic write" POSIX semantics [1] of xfs.
>
> Ah, I see. Sorry, I already forgot :-|

Understandable. It has been 3 years ;-)

> > There was a lot of discussion around range locks and changing the
> > fairness of rwsem readers and writers, but none of this changes the fact
> > that as long as the lock is file wide (and it does not look like that is
> > going to change in the near future), it is better for lock contention to
> > perform the serialization on page cache read/write and not on disk
> > read/write.
> >
> > Therefore, *if* it is acceptable to pre-warm the page cache for buffered
> > reads under invalidate_lock, that is a simple way to bring xfs performance
> > with a random rw mix workload on par with ext4 performance, without losing
> > the atomic write POSIX semantics. So everyone can be happy?
> So to spell out your proposal so that we are on the same page: you want to
> use invalidate_lock + page locks to achieve the "writes are atomic wrt
> reads" property XFS currently has, without holding i_rwsem in shared mode
> during reads. Am I getting it correct?

Not exactly.

> How exactly do you imagine the synchronization of buffered read against
> buffered write would work? Lock all pages for the read range in the page
> cache? You'd need to be careful to not bring the machine OOM when someone
> asks to read a huge range...

I imagine that the atomic r/w synchronisation will remain *exactly* as it
is today, by taking XFS_IOLOCK_SHARED around generic_file_read_iter() when
reading data into the user buffer. But before that, I would like to issue
and wait for a read of the pages in the range, to reduce the probability
of doing the read I/O under XFS_IOLOCK_SHARED.

The pre-warm of the page cache does not need to abide by the atomic read
semantics, and it is also tolerable if some pages are evicted between the
pre-warm and the read into the user buffer - in the worst case this will
result in I/O amplification, but in the common case it will be a big win
for mixed random r/w performance on xfs. To reduce the risk of page cache
thrashing, we can cap this optimization at a maximum number of pre-warmed
pages.

The questions are:
1. Does this plan sound reasonable?
2. Is there a ready helper (force_page_cache_readahead?) that I can use
   which takes the required page/invalidate locks?

Thanks,
Amir.
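To make the shape of the idea concrete, here is a rough sketch of what a
modified xfs_file_buffered_read() could look like. This is an illustration
only, not a patch from this thread: it assumes force_page_cache_readahead()
is the right pre-warm helper (which is exactly the open question above),
invents a MAX_PREWARM_PAGES cap for the thrashing limit, and is kernel-style
pseudocode that will not compile on its own.

```c
#define MAX_PREWARM_PAGES	256	/* illustrative thrashing cap */

STATIC ssize_t
xfs_file_buffered_read(
	struct kiocb		*iocb,
	struct iov_iter		*to)
{
	struct inode		*inode = file_inode(iocb->ki_filp);
	struct xfs_inode	*ip = XFS_I(inode);
	pgoff_t			index = iocb->ki_pos >> PAGE_SHIFT;
	unsigned long		nr = min_t(unsigned long,
					   DIV_ROUND_UP(iov_iter_count(to),
							PAGE_SIZE),
					   MAX_PREWARM_PAGES);
	ssize_t			ret;

	/*
	 * Opportunistic pre-warm under invalidate_lock only: populate the
	 * page cache for (a capped prefix of) the read range before taking
	 * the file-wide IOLOCK.  No atomicity is needed here; pages evicted
	 * between pre-warm and copy-out merely cause extra I/O later.
	 * Note this only *issues* readahead - actually waiting for the
	 * pages would need an additional step (e.g. locking each page).
	 */
	filemap_invalidate_lock_shared(inode->i_mapping);
	force_page_cache_readahead(inode->i_mapping, iocb->ki_filp,
				   index, nr);
	filemap_invalidate_unlock_shared(inode->i_mapping);

	/* Atomic-read semantics preserved: copy-out still under IOLOCK. */
	ret = xfs_ilock_iocb(iocb, XFS_IOLOCK_SHARED);
	if (ret)
		return ret;
	ret = generic_file_read_iter(iocb, to);
	xfs_iunlock(ip, XFS_IOLOCK_SHARED);
	return ret;
}
```

The point of the sketch is the ordering: the pre-warm happens under
invalidate_lock (shared) alone, so racing hole punches are still excluded,
while the copy to the user buffer keeps today's XFS_IOLOCK_SHARED
serialization and thus the atomic write semantics.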