Hi,

I'm currently converting btrfs (for now only its metadata) to use larger (order > 0) folios, and so far it looks pretty good (it can already pass local fstests runs).
But to achieve that, btrfs metadata is not using __filemap_get_folio() at all; instead it manually allocates larger folios and then attaches them to the filemap.
This is due to 2 factors:

- We want a fixed folio size
- We have our own fallback path

The whole idea is "go large or go bust": either we get a large folio matching our request exactly, or we fall back to the old per-page allocation (the cross-page handling infrastructure is there anyway).

AFAIK the main blocker is the folio allocation loop for larger folios in __filemap_get_folio(). It allocates with NORETRY and NOWARN, and after a failure it reduces the order and retries until the order reaches 0.
But if a filesystem really wants to support multi-page block sizes (e.g. a 16K block size on a 4K page size), its developers may really want to push all the work to the folio layer, without handling all the cross-page hassle.
I'm wondering if it's possible (well, my naive brain would say it's pretty easy) to add something like an FGP_FIXED_ORDER flag?
And what could the possible pitfalls be? (Too many ENOMEM failures even without NORETRY? Or just no real-world users so far?)
Thanks,
Qu