On 3/21/23 16:00, Matthew Wilcox wrote:
On Tue, Mar 07, 2023 at 09:14:27AM +0100, Hannes Reinecke wrote:
On 3/7/23 08:30, Matthew Wilcox wrote:
On Tue, Mar 07, 2023 at 07:55:32AM +0100, Hannes Reinecke wrote:
On 3/6/23 18:37, Matthew Wilcox wrote:
On Mon, Mar 06, 2023 at 01:01:23PM +0100, Hannes Reinecke wrote:
- page = alloc_page(gfp | __GFP_ZERO | __GFP_HIGHMEM);
- if (!page)
+ folio = folio_alloc(gfp | __GFP_ZERO, 0);
+ if (!folio)
Did you drop HIGHMEM support on purpose?
No; I thought that folios would be doing that implicitly.
Will be re-adding.
We can't ... not all filesystems want to allocate every folio from
HIGHMEM. eg for superblocks, it often makes more sense to allocate the
folio from lowmem than allocate it from highmem and keep it kmapped.
The only GFP flag that folios force-set is __GFP_COMP because folios by
definition are compound pages.
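IOW the hunk above should just keep passing the flag through, something
like (error path elided, as in the diff):

/* folio_alloc() only force-sets __GFP_COMP; highmem, like zeroing,
 * must still be requested explicitly by the caller */
folio = folio_alloc(gfp | __GFP_ZERO | __GFP_HIGHMEM, 0);
if (!folio)
	...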
Oh well.
However, when playing with the modified brd and setting the logical and
physical block size to 16k, the whole thing crashes
(not unexpectedly).
It does crash, however, in block_read_full_folio(), which rather
surprisingly (at least to me) is using create_page_buffers();
I would have expected something like create_folio_buffers().
Is this work in progress, or what is the plan here?
Supporting folios > PAGE_SIZE in blockdev is definitely still WIP.
I know of at least one bug, which is:
#define bh_offset(bh) ((unsigned long)(bh)->b_data & ~PAGE_MASK)
That needs to be something like
static size_t bh_offset(const struct buffer_head *bh)
{
	return (unsigned long)bh->b_data & (folio_size(bh->b_folio) - 1);
}
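A quick worked example of the difference (numbers made up): with a 16k
folio on a 4k-page system and b_data pointing 8k into the folio, the old
macro computes 8192 & ~PAGE_MASK == 0, silently losing the offset, while
the folio-size version returns the expected 8192 (8192 & 16383).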
I haven't done a thorough scan for folio-size problems in the block
layer; I've just been fixing up things as I notice them.
And that's not to mention the various instances of (PAGE_SHIFT - blkbits),
which will happily go negative for large block sizes,
with interesting effects when used as a left-shift count ...
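To make that concrete (my numbers, not from an actual crash): with 4k
pages PAGE_SHIFT is 12, while a 16k block size means blkbits == 14, so
the usual idiom shifts by -2, which is undefined behaviour in C:

/* shift count is (12 - 14) == -2 for 16k blocks: UB, not the
 * right-shift one might hope for */
block = (sector_t)folio->index << (PAGE_SHIFT - blkbits);

/* one folio-friendly sketch: go via the byte offset, which
 * never needs a negative shift count */
block = folio_pos(folio) >> blkbits;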
Yes, create_page_buffers() should now be create_folio_buffers(). Just
didn't get round to it yet.
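Presumably it just mirrors the page version; an untested sketch (the
name and signature are my guess, nothing settled):

static struct buffer_head *create_folio_buffers(struct folio *folio,
		struct inode *inode, unsigned int b_state)
{
	BUG_ON(!folio_test_locked(folio));

	/* same logic as create_page_buffers(), with the buffers
	 * attached to the folio instead of a single page */
	if (!folio_buffers(folio))
		create_empty_buffers(&folio->page,
				1 << READ_ONCE(inode->i_blkbits), b_state);
	return folio_buffers(folio);
}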
Ah, so I was on the right track after all.
But I was wondering ... couldn't we use the physical block size as a
hint for how many pages to allocate?
With that we can work on updating the stack even with existing hardware,
and we wouldn't crash immediately if we miss the odd conversion ...
Hmm?
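Something like this, say (a hypothetical helper; bdev_physical_block_size()
and get_order() exist, the helper itself is made up):

/* hypothetical: pick the folio allocation order from the device's
 * physical block size; plain 512b/4k devices stay at order 0 */
static unsigned int brd_folio_order(struct block_device *bdev)
{
	unsigned int pbs = bdev_physical_block_size(bdev);

	return pbs <= PAGE_SIZE ? 0 : get_order(pbs);
}

Then the allocation above becomes
folio_alloc(gfp | __GFP_ZERO | __GFP_HIGHMEM, brd_folio_order(bdev)),
and a missed conversion at least sees a folio no smaller than the block
it expects.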
Cheers,
Hannes