On Fri, Dec 3, 2021 at 6:58 PM Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:
> On Fri, Dec 3, 2021 at 7:29 AM Andreas Gruenbacher <agruenba@xxxxxxxxxx> wrote:
> > We're trying pretty hard to handle large I/O requests efficiently at
> > the filesystem level. A small, static upper limit in the fault-in
> > functions has the potential to ruin those efforts. So I'm not a fan of
> > that.
>
> I don't think fault-in should happen under any sane normal circumstances.
>
> Except for low-memory situations, and then you don't want to fault in
> large areas.
>
> Do you really expect to write big areas that the user has never even
> touched? That would be literally insane.
>
> And if the user _has_ touched them, then they'll be in-core. Except
> for the "swapped out" case.
>
> End result: this is purely a correctness issue, not a performance issue.

It happens when you mmap a file and write the mmapped region to another
file, for example. I don't think we want to make filesystems go bonkers
in such scenarios. Scaling down in response to memory pressure sounds
perfectly fine though.

Thanks,
Andreas
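
To make the scenario concrete, here is a rough userspace sketch of what I
mean (hypothetical paths, error handling omitted): the process never
touches the mapping itself, so none of the source pages are in its page
tables and the kernel's write path has to fault in every one of them.

  /* Rough sketch, not a test case: mmap one file and write the whole
   * mapping to another file without ever touching it from userspace.
   */
  #include <fcntl.h>
  #include <sys/mman.h>
  #include <sys/stat.h>
  #include <unistd.h>

  int main(void)
  {
          struct stat st;
          int src = open("/tmp/src", O_RDONLY);  /* hypothetical paths */
          int dst = open("/tmp/dst", O_WRONLY | O_CREAT | O_TRUNC, 0644);
          void *buf;

          fstat(src, &st);
          buf = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, src, 0);

          /* buf has not been accessed at this point, so the write path
           * is what ends up faulting in the source pages. */
          write(dst, buf, st.st_size);

          munmap(buf, st.st_size);
          close(src);
          close(dst);
          return 0;
  }

With a large source file, that single write() covers a lot of pages the
user has never touched, which is exactly where a small, static fault-in
limit would hurt.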