Matthew Wilcox <willy@xxxxxxxxxxxxx> wrote:

> > Note that PAGE_SIZE varies across arches and folios are going to
> > exacerbate this.  What I don't want to happen is that you read from a
> > file, it creates, say, a 4M (or larger) folio; you change three bytes
> > and then you're forced to write back the entire 4M folio.
>
> Actually, you do.  Two situations:
>
> 1. Application uses MADVISE_HUGEPAGE.  In response, we create a 2MB
> page and mmap it aligned.  We use a PMD sized TLB entry and then the
> CPU dirties a few bytes with a store.  There's no sub-TLB-entry
> tracking of dirtiness.  It's just the whole 2MB.

That's a special case: the app specifically asked for it.  I'll grant
that with mmap you have to mark a whole page as being dirty - but if
you mmapped it, you need to understand that's what will happen.
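To make case 1 concrete, here's a rough user-space sketch (untested; it
assumes a filesystem that can put THPs/large folios in the page cache,
that ./testfile exists and is at least 2MB long, and it glosses over
actually getting a 2MB-aligned address back from mmap()):

#define _GNU_SOURCE
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define HPAGE_SIZE (2UL * 1024 * 1024)

int main(void)
{
	char *p;
	int fd = open("testfile", O_RDWR);

	if (fd < 0)
		return 1;

	/* Map one 2MB chunk of the file and ask for huge page backing.
	 * If we get it, the whole range is covered by a single
	 * PMD-sized TLB entry.
	 */
	p = mmap(NULL, HPAGE_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED,
		 fd, 0);
	if (p == MAP_FAILED)
		return 1;
	madvise(p, HPAGE_SIZE, MADV_HUGEPAGE);

	/* Dirty three bytes.  The hardware dirty bit lives in that one
	 * PMD entry, so there's no sub-TLB-entry record of *which*
	 * bytes changed: writeback sees the whole 2MB as dirty.
	 */
	memcpy(p + 12345, "abc", 3);

	munmap(p, HPAGE_SIZE);
	close(fd);
	return 0;
}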
> 2. The bigger the folio, the more writes it will absorb before being
> written back.  So when you're writing back that 4MB folio, you're not
> just servicing this 3 byte write, you're servicing every other write
> which hit this 4MB chunk of the file.

You can argue it that way - but we already track dirtiness bytewise in
some filesystems, so what you want would necessitate a change of
behaviour there.
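By "bytewise" I mean something like the following - keep a single
[from, to) interval alongside the folio covering every buffered write
made to it, and only send those bytes at writeback time (the names here
are made up for illustration; the real thing lives in the filesystem's
private page data):

#include <stddef.h>

struct dirty_region {
	size_t from;	/* first dirty byte within the folio */
	size_t to;	/* one past the last dirty byte */
};

/* Called from the buffered-write path for each write into the folio;
 * an empty interval (from == to) means the folio is clean.
 */
static void note_write(struct dirty_region *d, size_t off, size_t len)
{
	if (d->from == d->to) {
		d->from = off;
		d->to = off + len;
	} else {
		if (off < d->from)
			d->from = off;
		if (off + len > d->to)
			d->to = off + len;
	}
}

With that, a three-byte store dirties three bytes, not 4MB.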
Note also that if the folio size exceeds the maximum RPC payload size
(1MB in NFS, I think), you have to issue multiple write operations to
fulfil that writeback; further, if you have an object-based system you
might be making writes to multiple servers - some of which won't
actually see any changed data - just to complete that one writeback.
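The shape of the problem, roughly (issue_write() standing in for
whatever transmits one WRITE RPC; hypothetical names again):

#include <stddef.h>

/* Transmit one write RPC of at most wsize bytes; made up for this
 * sketch.
 */
extern int issue_write(long long pos, size_t len);

static int write_back_region(long long pos, size_t len, size_t wsize)
{
	size_t off;

	for (off = 0; off < len; off += wsize) {
		size_t n = len - off < wsize ? len - off : wsize;
		int ret = issue_write(pos + off, n);

		if (ret < 0)
			return ret;	/* and partial failure is now a thing */
	}
	return 0;
}

With a 4MB folio and a 1MB wsize, that's four WRITEs on the wire to
service a three-byte change.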
I wonder if this needs pushing onto the various network filesystem
mailing lists to find out what they want and why.

David