On Tue, Sep 03, 2024 at 11:49:28AM -0600, Jens Axboe wrote:
> The elephant in the room here is why an 80M completion takes 100 msec?
> That seems... insane.
>
> That aside, doing writes that big isn't great for latencies in general,
> even if they are orders of magnitude smaller (as they should be). Maybe
> this is solvable by just limiting the write size here.
>
> But it really seems out of line for a write that size to take 100 msec
> to process.

Pagecache state processing is quite inefficient; we had to limit the amount of it done per completion for XFS to avoid latency problems a while ago.

Note that moving to folios means you can process a lot more data with the same number of completion iterations as well. I'd suggest the submitter look into that for whatever file system they are using.
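
To put rough numbers on that last point, here is a minimal userspace sketch (not kernel code; the 64 KiB and 2 MiB folio sizes are just assumed examples) of how the per-completion iteration count shrinks as the unit the pagecache state is tracked in grows:

/*
 * Back-of-the-envelope sketch only, not kernel code: iterations a
 * completion handler would need for an 80 MiB write when state is
 * touched once per 4 KiB page versus once per larger folio.
 */
#include <stdio.h>

int main(void)
{
	const unsigned long long write_bytes = 80ULL << 20; /* 80 MiB completion */
	const unsigned long long unit_sizes[] = {
		4096,		/* classic per-page state */
		64 << 10,	/* 64 KiB folio (assumed size, for illustration) */
		2 << 20,	/* 2 MiB folio (assumed size, for illustration) */
	};

	for (int i = 0; i < 3; i++) {
		unsigned long long iters = write_bytes / unit_sizes[i];
		printf("unit %7llu bytes -> %llu completion iterations\n",
		       unit_sizes[i], iters);
	}
	return 0;
}

With 4 KiB pages that is 20480 iterations for a single 80 MiB completion; with 2 MiB folios it drops to 40, which is why larger folios help even before making the per-unit work cheaper.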