Re: [RFC v2 05/11] fs/mpage: use blocks_per_folio instead of blocks_per_page

On Fri, Dec 13, 2024 at 07:10:43PM -0800, Luis Chamberlain wrote:

Something is wrong here ... I'm just not sure what (nor am I sure why
testing hasn't caught it).

> -	const unsigned blocks_per_page = PAGE_SIZE >> blkbits;
> +	const unsigned blocks_per_folio = folio_size(folio) >> blkbits;

OK, fine ...

> -	last_block = block_in_file + args->nr_pages * blocks_per_page;
> +	last_block = block_in_file + args->nr_pages * blocks_per_folio;

So we've scaled up the right hand side of this, that's good.

> @@ -385,7 +385,7 @@ int mpage_read_folio(struct folio *folio, get_block_t get_block)
>  {
>  	struct mpage_readpage_args args = {
>  		.folio = folio,
> -		.nr_pages = 1,
> +		.nr_pages = folio_nr_pages(folio),

Oh, but we've also scaled up the left hand side.  So instead of being,
say, 4 times larger, it's now 16 times larger.

Now, where do we use it?

                if (block_in_file < last_block) {
                        map_bh->b_size = (last_block-block_in_file) << blkbits;
                        if (args->get_block(inode, block_in_file, map_bh, 0))
                                goto confused;

so we're going to ask the filesystem for 4x as much data as we actually
need.  Guess it'll depend on the filesystem whether that's a bad thing
or not.
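
To put rough numbers on it (the sizes below are assumptions for
illustration, not anything from the patch): with 4KiB pages, 512-byte
blocks and a 4-page folio, the old code mapped 8 blocks per readpage,
one folio actually needs 32, and the patched code asks get_block() for
128.  A throwaway userspace sketch of that arithmetic:

/*
 * Throwaway userspace sketch of the arithmetic above; the page, block
 * and folio sizes are assumptions for illustration only.
 */
#include <stdio.h>

int main(void)
{
	const unsigned int page_size = 4096;	/* assumed PAGE_SIZE */
	const unsigned int blkbits = 9;		/* assumed 512-byte blocks */
	const unsigned int folio_pages = 4;	/* assumed order-2 folio */

	unsigned int blocks_per_page = page_size >> blkbits;		    /* 8 */
	unsigned int blocks_per_folio = (folio_pages * page_size) >> blkbits; /* 32 */

	/* Old mpage_read_folio(): nr_pages == 1, scaled by blocks_per_page. */
	unsigned int old_span = 1 * blocks_per_page;			    /* 8 */

	/* Patched: nr_pages == folio_nr_pages() AND the multiplier is
	 * blocks_per_folio, so both factors of the product got scaled up. */
	unsigned int new_span = folio_pages * blocks_per_folio;	    /* 128 */

	/* What one 4-page folio actually needs. */
	unsigned int wanted = blocks_per_folio;				    /* 32 */

	printf("old %u, wanted %u, patched %u blocks\n",
	       old_span, wanted, new_span);
	printf("get_block() over-request: %ux\n", new_span / wanted);	    /* 4x */
	return 0;
}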

I think the right solution here is to keep working in terms of
pages.  readahead_count() returns the number of pages, not the number of
folios (remember we can have a mix of different-size folios in the same
readahead batch).
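
Something like this (completely untested, just reusing the names from
the hunks above to make the units explicit):

	const unsigned blocks_per_page = PAGE_SIZE >> blkbits;

	/* nr_pages counts PAGE_SIZE units (readahead_count() does, and
	 * mpage_read_folio() can pass folio_nr_pages()), so the window
	 * handed to get_block() is scaled by blocks_per_page ... */
	last_block = block_in_file + args->nr_pages * blocks_per_page;

	/* ... while the limits within this particular folio can still
	 * use folio_size(folio) >> blkbits. */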

But I can be argued with.



