On Thu, Jul 29, 2021 at 04:49:35PM +0300, Konstantin Komarov wrote:
> +static void ntfs_readahead(struct readahead_control *rac)
> +{
> +	struct address_space *mapping = rac->mapping;
> +	struct inode *inode = mapping->host;
> +	struct ntfs_inode *ni = ntfs_i(inode);
> +	u64 valid;
> +	loff_t pos;
> +
> +	if (is_resident(ni)) {
> +		/* no readahead for resident */
> +		return;
> +	}
> +
> +	if (is_compressed(ni)) {
> +		/* no readahead for compressed */
> +		return;

I'm sure this works, but it could perform better ;-)  The ->readpage
path that you fall back to is synchronous (unless I misunderstand
something).  We now have readahead_expand() which lets you ensure that
there are pages in the page cache for the entire range of the
compressed extent.  So if you can create an asynchronous decompression
path (probably need your own custom bi_end_io), you can start reads
here and only unlock the pages after you've done the decompression.

This should not gate your code being accepted upstream.  What I'd
really like to see is your buffered i/o path being converted from
buffer_heads to iomap, and iomap to gain the ability to handle
compressed extents.

> +	valid = ni->i_valid;
> +	pos = readahead_pos(rac);
> +
> +	if (valid < i_size_read(inode) && pos <= valid &&
> +	    valid < pos + readahead_length(rac)) {
> +		/* range cross 'valid'. read it page by page */
> +		return;

This, I do not understand.  Why can't ntfs_get_block / mpage_readahead
cope with a readahead that crosses i_valid?  AIUI, i_valid is basically
the same as i_size, and mpage_readahead() handles crossing i_size
just fine.
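
To be concrete about the compressed case above, the shape I have in
mind is roughly the following.  This is an untested, hand-waved sketch,
not anything ntfs3 actually has: ntfs_compressed_extent_bounds(),
ntfs_map_compressed_run() and ntfs_decompress_into_pages() are made-up
placeholders, error handling is omitted, and real code would likely
punt the decompression to a workqueue rather than do it in bi_end_io
context.  The only point is the ordering: expand the readahead window
over the whole compressed extent, start the read of the on-disk
compressed run, and only unlock the page-cache pages once you have
decompressed into them.

#include <linux/bio.h>
#include <linux/pagemap.h>
#include <linux/slab.h>

struct ntfs_cmpr_read {
	struct page **pages;		/* locked page-cache pages to fill */
	unsigned int nr_pages;
	void *cbuf;			/* buffer holding the compressed data */
	size_t clen;
};

static void ntfs_cmpr_end_io(struct bio *bio)
{
	struct ntfs_cmpr_read *rd = bio->bi_private;
	unsigned int i;

	/* Placeholder: decompress rd->cbuf into the page-cache pages. */
	ntfs_decompress_into_pages(rd->cbuf, rd->clen, rd->pages, rd->nr_pages);

	for (i = 0; i < rd->nr_pages; i++) {
		SetPageUptodate(rd->pages[i]);
		unlock_page(rd->pages[i]);
		put_page(rd->pages[i]);
	}
	kvfree(rd->cbuf);
	kfree(rd->pages);
	kfree(rd);
	bio_put(bio);
}

static void ntfs_readahead_compressed(struct readahead_control *rac,
				      struct ntfs_inode *ni)
{
	struct ntfs_cmpr_read *rd;
	struct bio *bio;
	struct page *page;
	loff_t start;
	size_t len;
	unsigned int i = 0;

	/* Placeholder: find the compressed extent covering this window. */
	if (ntfs_compressed_extent_bounds(ni, readahead_pos(rac), &start, &len))
		return;

	/* Make sure page-cache pages exist for the whole extent. */
	readahead_expand(rac, start, len);

	rd = kzalloc(sizeof(*rd), GFP_KERNEL);
	rd->nr_pages = readahead_count(rac);
	rd->pages = kcalloc(rd->nr_pages, sizeof(*rd->pages), GFP_KERNEL);
	while ((page = readahead_page(rac)))
		rd->pages[i++] = page;	/* pages come back locked */

	/*
	 * Placeholder: build a bio that reads the on-disk compressed run
	 * into rd->cbuf (the block layer needs a real DMA target, e.g.
	 * separately allocated pages, not the page-cache pages above).
	 */
	bio = ntfs_map_compressed_run(ni, start, &rd->cbuf, &rd->clen);
	bio->bi_end_io = ntfs_cmpr_end_io;
	bio->bi_private = rd;
	submit_bio(bio);	/* pages stay locked until end_io runs */
}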