Jens Axboe <axboe@xxxxxxxxx> writes:

> On 8/31/20 10:56 AM, Matthew Wilcox wrote:
>> On Mon, Aug 31, 2020 at 10:39:26AM -0600, Jens Axboe wrote:
>>> We really should ensure that ->io_pages is always set, imho, instead of
>>> having to work-around it in other spots.
>>
>> Interestingly, there are only three places in the entire kernel which
>> _use_ bdi->io_pages.  FAT, Verity and the pagecache readahead code.
>>
>> Verity:
>>                 unsigned long num_ra_pages =
>>                         min_t(unsigned long, num_blocks_to_hash - i,
>>                               inode->i_sb->s_bdi->io_pages);
>>
>> FAT:
>>         if (ra_pages > sb->s_bdi->io_pages)
>>                 ra_pages = rounddown(ra_pages, sb->s_bdi->io_pages);
>>
>> Pagecache:
>>         max_pages = max_t(unsigned long, bdi->io_pages, ra->ra_pages);
>> and
>>         if (req_size > max_pages && bdi->io_pages > max_pages)
>>                 max_pages = min(req_size, bdi->io_pages);
>>
>> The funny thing is that all three are using it differently.  Verity is
>> taking io_pages to be the maximum amount to readahead.  FAT is using
>> it as the unit of readahead (round down to the previous multiple) and
>> the pagecache uses it to limit reads that exceed the current per-file
>> readahead limit (but allows per-file readahead to exceed io_pages,
>> in which case it has no effect).
>>
>> So how should it be used?  My inclination is to say that the pagecache
>> is right, by virtue of being the most-used.
>
> When I added ->io_pages, it was for the page cache use case. The others
> grew after that...

The FAT and pagecache usage serve a similar, if not the same, purpose: both
use io_pages as the optimal I/O size. In the pagecache case, io_pages is only
consulted when a single request exceeds it. In the FAT case, we have perfect
knowledge of the future/total request size, so FAT divides the request by
io_pages and adjusts ra_pages using that knowledge.

I don't know about verity.

Thanks.
-- 
OGAWA Hirofumi <hirofumi@xxxxxxxxxxxxxxxxxx>
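
[Editorial note: since the three call sites differ only in a few lines of
arithmetic, the following stand-alone user-space sketch (not kernel code,
with made-up numbers) mimics the quoted snippets to show how each interprets
io_pages. The pagecache case follows only the second quoted line, i.e. the
ondemand-readahead path.]

/*
 * Sketch of the three io_pages policies quoted above.
 * Not kernel code; values are invented for illustration.
 */
#include <stdio.h>

/* round down to a multiple of 'unit', like the kernel's rounddown() */
static unsigned long rounddown_ul(unsigned long x, unsigned long unit)
{
	return x - (x % unit);
}

static unsigned long min_ul(unsigned long a, unsigned long b)
{
	return a < b ? a : b;
}

int main(void)
{
	unsigned long io_pages = 256;	/* device's optimal I/O size, in pages */
	unsigned long ra_pages = 32;	/* per-file readahead limit */
	unsigned long req_size = 600;	/* pages the caller asks for */

	/* Verity: io_pages is a hard cap on the readahead window. */
	unsigned long verity_ra = min_ul(req_size, io_pages);

	/* FAT: io_pages is the unit; a larger request is rounded down
	 * to a multiple of it. */
	unsigned long fat_ra = req_size;
	if (fat_ra > io_pages)
		fat_ra = rounddown_ul(fat_ra, io_pages);

	/* Pagecache (ondemand path): start from the per-file limit and
	 * let a request that exceeds it grow the window up to io_pages. */
	unsigned long max_pages = ra_pages;
	if (req_size > max_pages && io_pages > max_pages)
		max_pages = min_ul(req_size, io_pages);

	printf("verity=%lu fat=%lu pagecache=%lu\n",
	       verity_ra, fat_ra, max_pages);
	return 0;
}

With these numbers the sketch prints "verity=256 fat=512 pagecache=256":
Verity and the pagecache both clamp to io_pages, while FAT issues the
largest multiple of io_pages that fits the request.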