Re: [PATCH 6/7] mm/filemap: allocate folios with mapping blocksize


 



On 6/19/23 10:08, Pankaj Raghav wrote:
Hi Hannes,
On Wed, Jun 14, 2023 at 01:46:36PM +0200, Hannes Reinecke wrote:
The mapping has an underlying blocksize (by virtue of
mapping->host->i_blkbits), so if the mapping blocksize
is larger than the pagesize we should allocate folios
in the correct order.

Network filesystems such as 9pfs set blkbits to the maximum amount of data
they want to transfer, leading to unnecessary memory pressure, as we will
try to allocate higher-order folios (order 5 in my setup). Isn't it better
for each filesystem to request the minimum folio order it needs for its
page cache early on? Block devices can do the same for their block cache.

I have a prototype along those lines and I will post it soon. This is also
something willy indicated before in a mailing-list conversation.

Well; I _thought_ that's why we had things like optimal I/O size and
maximal I/O size. But these seem to be relegated to the request queue limits,
so I guess they're not available from 'struct block_device' or 'struct gendisk'.

So I've been thinking of adding a flag somewhere (possibly in
'struct address_space') to indicate that blkbits is a hard limit
and not just an advisory thing.

But indeed, I've seen this with NFS, too, which insists on setting blkbits to something like 8.

Cheers,

Hannes



