On Tue, Feb 27, 2024 at 06:08:57AM -0800, Luis Chamberlain wrote:
> On Tue, Feb 27, 2024 at 05:07:30AM -0500, Kent Overstreet wrote:
> > On Fri, Feb 23, 2024 at 03:59:58PM -0800, Luis Chamberlain wrote:
> > > Part of the testing we have done with LBS was to do some performance
> > > tests on XFS to ensure things are not regressing. Building Linux is a
> > > decent test and we did some random cloud instance tests on that and
> > > presented that at Plumbers, but it doesn't really cut it if we want
> > > to push things to the limit. What are the limits to buffered IO and
> > > how do we test that? Who keeps track of it?
> > >
> > > The obvious recurring tension is that for really high performance,
> > > folks just recommend using direct IO. But if you are stress testing
> > > changes to a filesystem and want to push buffered IO to its limits it
> > > makes sense to stick to buffered IO, otherwise how else do we test it?
> > >
> > > It is good to know the limits of buffered IO too because some
> > > workloads cannot use direct IO. For instance PostgreSQL doesn't have
> > > direct IO support and even as late as the end of last year we learned
> > > that adding direct IO to PostgreSQL would be difficult. Chris Mason
> > > has also noted that direct IO can force writes during reads (?)...
> > > Anyway, testing the limits of buffered IO to ensure you are not
> > > creating regressions when doing some page cache surgery seems like it
> > > might be useful and a sensible thing to do. The good news is we have
> > > not found regressions with LBS, but all the testing seems to beg the
> > > question: what are the limits of buffered IO anyway, and how does it
> > > scale? Do we know, do we care? Do we keep track of it? How does it
> > > compare to direct IO for some workloads? How big is the delta? How do
> > > we best test that? How do we automate all that? Do we want to
> > > automatically test this to avoid regressions?
> > >
> > > The obvious issue with some workloads for buffered IO is a possible
> > > penalty if you are not really re-using folios added to the page
> > > cache. Jens Axboe reported a while ago issues with workloads doing
> > > random reads over a data set 10x the size of RAM and also proposed
> > > RWF_UNCACHED as a way to help [0]. As Chinner put it, this seemed
> > > more like direct IO with kernel pages and a memcpy(), and it requires
> > > implementing further serialization that we already do for direct IO
> > > writes. There at least seems to be agreement that if we're going to
> > > provide an enhancement or alternative we should strive not to make
> > > the same mistakes we've made with direct IO. The rationale for some
> > > workloads to use buffered IO is that it helps reduce some tail
> > > latencies, so that's something to live up to.
> > >
> > > On that same thread Christoph also mentioned the possibility of a
> > > direct IO variant which can leverage the cache. Is that something we
> > > want to move forward with?
> > >
> > > Chris Mason also listed a few other desirables if we do:
> > >
> > >   - Allowing concurrent writes (xfs DIO does this now)
> >
> > AFAIK every filesystem allows concurrent direct writes, not just xfs;
> > it's _buffered_ writes that we care about here.
>
> The context above was a possible direct IO variant; that's why direct
> IO was mentioned and that XFS at least had support for it.
>
> > I just pushed a patch to my CI for buffered writes without taking the
> > inode lock - for bcachefs. It'll be straightforward, but a decent
> > amount of work, to lift this to the VFS, if people are interested in
> > collaborating.
> >
> > https://evilpiepirate.org/git/bcachefs.git/log/?h=bcachefs-buffered-write-locking
>
> Neat, this is sort of what I wanted to get a sense of: whether this
> sort of topic is worth discussing at LSFMM.
>
> > The approach is: for non-extending, non-appending writes, see if we
> > can pin the entire range of the pagecache we're writing to; fall back
> > to taking the inode lock if we can't.
>
> Perhaps a silly thought... but my initial reaction is: would it make
> sense for the page cache itself to make this easier for us? It is not
> clear to me, but my first reaction on seeing some of these deltas was:
> what if we had the space split up into groups, as we do with XFS
> agcounts, so that each group deals with its own ranges? I considered
> this before profiling, and as with Matthew I figured it might be lock
> contention. It very likely is not for my test case, and as Linus and
> Dave have clarified, we are both penalized and also have
> single-threaded writeback. If we had a group split we'd have locks per
> group and perhaps a dedicated writeback thread per group.

Wtf are you talking about?
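
For anyone trying to picture the pin-the-pagecache-range approach quoted
above, here is a minimal, hedged sketch of its shape in C. It is not the
actual bcachefs patch (see the branch linked above for that);
pin_pagecache_range() and unpin_pagecache_range() are hypothetical
helpers rather than existing kernel APIs, generic_perform_write() is
used with its recent-kernel signature, and races with a concurrent
truncate or extension after the i_size check are glossed over.

	#include <linux/fs.h>
	#include <linux/pagemap.h>
	#include <linux/uio.h>

	/*
	 * Hypothetical helpers, not existing kernel APIs: try to pin (and
	 * lock) every folio covering [pos, pos + len), and undo that again.
	 */
	bool pin_pagecache_range(struct address_space *mapping, loff_t pos, size_t len);
	void unpin_pagecache_range(struct address_space *mapping, loff_t pos, size_t len);

	static ssize_t sketch_buffered_write(struct kiocb *iocb, struct iov_iter *from)
	{
		struct file *file = iocb->ki_filp;
		struct inode *inode = file->f_mapping->host;
		loff_t pos = iocb->ki_pos;
		size_t len = iov_iter_count(from);
		ssize_t ret;

		/*
		 * Appending or extending writes change i_size, so they keep
		 * taking the inode lock, as does anything we fail to pin.
		 */
		if ((iocb->ki_flags & IOCB_APPEND) || pos + len > i_size_read(inode))
			goto fallback;

		/* Optimistic path: pin the folios covering the write range. */
		if (pin_pagecache_range(file->f_mapping, pos, len)) {
			/* Copy in via the normal buffered write machinery. */
			ret = generic_perform_write(iocb, from);
			unpin_pagecache_range(file->f_mapping, pos, len);
			return ret;
		}

	fallback:
		/* The ordinary serialized buffered write path. */
		inode_lock(inode);
		ret = generic_perform_write(iocb, from);
		inode_unlock(inode);
		return ret;
	}

The only interesting part is the control flow: try the optimistic pinned
path for writes that stay within i_size, and take the inode lock only
when pinning fails or the write would extend the file.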