On 21/09/2019 02.05, Linus Torvalds wrote:
> On Fri, Sep 20, 2019 at 12:35 AM Konstantin Khlebnikov
> <khlebnikov@xxxxxxxxxxxxxx> wrote:
>> This patch implements a write-behind policy which tracks sequential writes
>> and starts background writeback when a file has enough dirty pages.
>
> Apart from a spelling error ("contigious"), my only reaction is that
> I've wanted this for multi-file writes, not just for single big
> files.
>
> Yes, single big files may be simpler and perhaps the "10% effort for
> 90% of the gain", and thus the right thing to do, but I do wonder if
> you've looked at simply extending it to cover multiple files when
> people copy a whole directory (or unpack a tar-file, or similar).
>
> Now, I hear you say "those are so small these days that it doesn't
> matter". And maybe you're right. But particularly for slow media,
> triggering good streaming write behavior has been a problem in the
> past.
>
> So I'm wondering whether the "writebehind" state should perhaps be
> considered to be a process state, rather than "struct file" state, and
> also start triggering when writing smaller files.
It's simple to extend the existing state with a per-task counter of
sequential writes to detect patterns like unpacking a tarball of small
files. After reaching some threshold, write-behind could flush files
at close. But in this case it's hard to wait for previous writes in
order to limit the number of requests and pages under writeback for
each stream. Theoretically we could build a chain of inodes for
delaying and batching.
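
For illustration, a minimal sketch of the per-task counter idea and a
flush-at-close hook. The field seq_write_streak, the threshold, and the
hook point are made-up names for this sketch, not part of the patch, and
it deliberately skips the hard part described above (throttling against
writeback already in flight from previous files):

/*
 * Hypothetical sketch only, not the actual patch: count consecutive
 * sequentially-written files per task and, once past a threshold,
 * kick off asynchronous writeback when each such file is closed.
 * A non-sequential write would reset current->seq_write_streak
 * (not shown).
 */

/* illustrative new field in struct task_struct */
	unsigned int		seq_write_streak;

#define WRITEBEHIND_STREAK	4	/* example threshold, value is arbitrary */

/* called from the close path for a file that was written sequentially */
static void writebehind_on_close(struct file *file)
{
	struct address_space *mapping = file->f_mapping;

	if (++current->seq_write_streak >= WRITEBEHIND_STREAK)
		filemap_fdatawrite(mapping);	/* start writeback, do not wait */
}

The missing piece is exactly the problem above: nothing here waits on or
accounts for earlier files' writeback, so the number of in-flight pages
per stream is unbounded.
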
> Maybe this was already discussed and people decided that the big-file
> case was so much easier that it wasn't worth worrying about
> writebehind for multiple files.
>
>               Linus