On Fri, Jun 21, 2024 at 07:48:08AM +0200, Christoph Hellwig wrote:
> On Thu, Jun 20, 2024 at 12:51:42PM -0700, Darrick J. Wong wrote:
> > > Further, with no backoff we don't need to gather huge delwri lists to
> > > mitigate the impact of backoffs, so we can submit IO more frequently
> > > and reduce the time log items spend in flushing state by breaking
> > > out of the item push loop once we've gathered enough IO to batch
> > > submission effectively.
> >
> > Is that what the new count > 1000 branch does?
>
> That's my interpretation anyway.  I'll let Dave chime in if he disagrees.

<nod> I'll await a response on this...

> > > Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
> > > ---
> > >  fs/xfs/xfs_inode.c      | 1 +
> > >  fs/xfs/xfs_inode_item.c | 6 +++++-
> >
> > Does it make sense to do this for buffer or dquot items too?
>
> Not having written this, here are my 2 unqualified cents:
>
> For dquots it looks like it could be easily ported over, but I guess no
> one has been bothering with dquot performance work for a while, as it's
> also missing a bunch of other things we did to the inode.  But given
> that, according to Dave's commit log, the inode cluster flushing is a
> big part of this, dquots probably aren't as affected anyway, as we
> flush them individually (and there generally are a lot fewer dquot
> items in the AIL anyway).

It probably helps that dquot "clusters" are single fsblocks too.

> For buf items the buffers are queued up on the on-stack delwri list
> and written when we flush them.  So we won't ever find already
> flushing items.

Oh right, because only the AIL flushes logged buffers to disk.

--D
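
For readers following along, here is a rough sketch of the kind of early
break being discussed above.  Every name in it (example_ail_push,
example_item_push, EXAMPLE_PUSH_BATCH, and so on) is a hypothetical
placeholder, not the real xfsaild code or the patch under review; it only
illustrates capping how many buffers get gathered on the on-stack delwri
list before the batch is submitted, instead of walking the whole AIL.

    /*
     * Illustrative sketch only -- hypothetical names, not the actual
     * xfsaild push path.  Once enough buffers have been gathered on the
     * on-stack delwri list to batch IO submission effectively, stop
     * walking the AIL and submit, so flushed items spend less time in
     * flushing state.
     */
    #include <linux/list.h>

    #define EXAMPLE_PUSH_BATCH	1000

    static void
    example_ail_push(struct example_ail *ail)
    {
    	struct list_head	buffer_list;	/* on-stack delwri list */
    	struct example_log_item	*lip;
    	int			count = 0;

    	INIT_LIST_HEAD(&buffer_list);

    	for (lip = example_ail_min(ail); lip != NULL;
    	     lip = example_ail_next(ail, lip)) {
    		/*
    		 * Try to lock and flush the item; on success its buffer
    		 * is added to buffer_list for delayed write submission.
    		 */
    		if (example_item_push(lip, &buffer_list) !=
    		    EXAMPLE_ITEM_SUCCESS)
    			continue;

    		/* Enough IO gathered for an efficient batch; stop here. */
    		if (++count > EXAMPLE_PUSH_BATCH)
    			break;
    	}

    	example_buf_delwri_submit_nowait(&buffer_list);
    }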