On Wed, May 12, 2021 at 03:46:11PM +0200, Jan Kara wrote:
> Currently, serializing operations such as page fault, read, or readahead
> against hole punching is rather difficult. The basic race scheme is
> like:
>
> fallocate(FALLOC_FL_PUNCH_HOLE)               read / fault / ..
>   truncate_inode_pages_range()
>                                                 <create pages in page
>                                                  cache here>
>   <update fs block mapping and free blocks>
>
> Now the problem is in this way read / page fault / readahead can
> instantiate pages in page cache with potentially stale data (if blocks
> get quickly reused). Avoiding this race is not simple - page locks do
> not work because we want to make sure there are *no* pages in given
> range. inode->i_rwsem does not work because page fault happens under
> mmap_sem which ranks below inode->i_rwsem. Also using it for reads makes
> the performance for mixed read-write workloads suffer.
>
> So create a new rw_semaphore in the address_space - invalidate_lock -
> that protects adding of pages to page cache for page faults / reads /
> readahead.

Remind me (or, rather, add to the documentation) why we have to hold the
invalidate_lock during the call to readpage / readahead, and we don't
just hold it around the call to add_to_page_cache /
add_to_page_cache_locked / add_to_page_cache_lru?  I appreciate that
->readpages is still going to suck, but we're down to just three
implementations of ->readpages now (9p, cifs & nfs).

Also, could I trouble you to run the comments through 'fmt' (or
equivalent)?  It's easier to read if you're not kissing right up on
80 columns.

> +++ b/fs/inode.c
> @@ -190,6 +190,9 @@ int inode_init_always(struct super_block *sb, struct inode *inode)
>  	mapping_set_gfp_mask(mapping, GFP_HIGHUSER_MOVABLE);
>  	mapping->private_data = NULL;
>  	mapping->writeback_index = 0;
> +	init_rwsem(&mapping->invalidate_lock);
> +	lockdep_set_class(&mapping->invalidate_lock,
> +			  &sb->s_type->invalidate_lock_key);

Why not:

	__init_rwsem(&mapping->invalidate_lock, "mapping.invalidate_lock",
			&sb->s_type->invalidate_lock_key);
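
For reference, my reading of the serialization the changelog describes
is roughly the sketch below.  This is my own sketch, not what the patch
does; the function names exist in the current tree, but the placement
and scope of the lock sections here are illustrative only:

	/* fallocate(FALLOC_FL_PUNCH_HOLE) side: exclusive over both steps */
	down_write(&mapping->invalidate_lock);
	truncate_inode_pages_range(mapping, start, end);
	/* ... update the fs block mapping and free the blocks ... */
	up_write(&mapping->invalidate_lock);

	/* read / page fault / readahead side: shared around instantiation */
	down_read(&mapping->invalidate_lock);
	/* ... add pages to the page cache and read them in ... */
	up_read(&mapping->invalidate_lock);

which is what makes me wonder how far the shared section really needs
to extend - over ->readpage / readahead as the patch has it, or just
over the add_to_page_cache* call.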