Matthew Wilcox <willy@xxxxxxxxxxxxx> wrote:

> On Tue, Jan 25, 2022 at 01:57:44PM +0000, David Howells wrote:
> > +	while (readahead_count(ractl) - ractl->_batch_count) {
>
> You do understand that prefixing a structure member with an '_' means
> "Don't use this", right?  If I could get the compiler to prevent you, I
> would.

Yes, I know.  However, as previously discussed, I think that your
implementation of readahead batching doesn't work right - hence the need
to apply compensation to the values returned by the accessor functions.

Btw, I end up doing this:

	for (i = 0; i < nr_pages; i++)
		if (!readahead_folio(ractl))
			BUG();

in patch 5.  I want to create a batch, but I don't want to be given the
array of addresses of the folios as I'm going to use an xarray-class
iterator.  Further, _batch_count at this point is some value related to
just the last folio and not the batch as a whole :-/

(Also, the above won't work if any of the folios retrieved are larger
than a page.)

Note that cifs_readahead() is removed in patch 7 and readahead
functionality is offloaded to netfslib, so I'm not sure it's worth
spending much time fixing it.

[I should mention that netfs_readahead() also does:

	while (readahead_folio(ractl))
		;

which could probably be replaced with something better that doesn't keep
taking and dropping the RCU read lock.]

Would you object if I added a function like:

	static inline void readahead_commit_batch(struct readahead_control *rac)
	{
		BUG_ON(rac->_batch_count > rac->_nr_pages);
		rac->_nr_pages -= rac->_batch_count;
		rac->_index += rac->_batch_count;
		rac->_batch_count = 0;
	}

It could then be called from both __readahead_folio() and
__readahead_batch().  For __readahead_folio(), the duplicate setting of
_batch_count should be optimised away on the path where a folio is
returned.

I could then call this from the loop in cifs before going round again
(there's a rough sketch of what that might look like at the end of this
mail).

I'd also like to consider adding something like:

	static inline void readahead_set_batch(struct readahead_control *rac)
	{
		struct page *page;
		XA_STATE(xas, &rac->mapping->i_pages, 0);

		BUG_ON(rac->_batch_count > rac->_nr_pages);
		rac->_nr_pages -= rac->_batch_count;
		rac->_index += rac->_batch_count;
		rac->_batch_count = 0;

		xas_set(&xas, rac->_index);
		rcu_read_lock();
		xas_for_each(&xas, page, rac->_index + rac->_nr_pages - 1) {
			if (xas_retry(&xas, page))
				continue;
			VM_BUG_ON_PAGE(!PageLocked(page), page);
			VM_BUG_ON_PAGE(PageTail(page), page);
			rac->_batch_count += thp_nr_pages(page);
		}
		rcu_read_unlock();
	}

so that netfslib can use it to load all the pages it is given into a
batch without retrieving the page pointers (again, a rough sketch is
appended below).

David
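
For illustration, a rough sketch of what the cifs loop might then look
like, assuming readahead_commit_batch() as above; EXAMPLE_MAX_BATCH and
example_issue_read() are placeholders rather than the real cifs code:

	static void example_readahead(struct readahead_control *ractl)
	{
		struct page *pages[EXAMPLE_MAX_BATCH];
		unsigned int nr;

		/* With the batch committed each time round, readahead_count()
		 * is accurate on its own and the loop condition no longer
		 * needs to compensate with ->_batch_count.
		 */
		while (readahead_count(ractl)) {
			nr = __readahead_batch(ractl, pages, EXAMPLE_MAX_BATCH);
			if (!nr)
				break;

			/* Issue the I/O for these pages however the fs likes. */
			example_issue_read(ractl, pages, nr);

			/* Consume the batch before going round again. */
			readahead_commit_batch(ractl);
		}
	}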
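
And a similarly rough sketch of the netfslib side, assuming
readahead_set_batch() as above; since netfslib only needs the range, it
can cover the pages with an ITER_XARRAY iterator instead of collecting
page pointers (example_netfs_readahead() is a placeholder, not the code
in patch 7):

	static void example_netfs_readahead(struct readahead_control *ractl)
	{
		struct iov_iter iter;

		/* Claim every page in the request as one batch without
		 * copying out any page pointers; the pages stay locked in
		 * the mapping for the I/O to find.
		 */
		readahead_set_batch(ractl);

		/* Describe the whole range with an xarray-class iterator. */
		iov_iter_xarray(&iter, READ, &ractl->mapping->i_pages,
				readahead_pos(ractl), readahead_length(ractl));

		/* ... hand the iterator to the read paths ... */
	}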