Re: [PATCH 2/2] xfs: kick extra large ioends to completion workqueue

On Fri, Oct 02, 2020 at 09:19:23AM -0700, Darrick J. Wong wrote:
> On Fri, Oct 02, 2020 at 11:33:57AM -0400, Brian Foster wrote:
> > We've had reports of soft lockup warnings in the iomap ioend
> > completion path due to very large bios and/or bio chains. Divert any
> > ioends with 256k or more pages to process to the workqueue so
> > completion occurs in non-atomic context and can reschedule to avoid
> > soft lockup warnings.
> > 
> > Signed-off-by: Brian Foster <bfoster@xxxxxxxxxx>
> > ---
> >  fs/xfs/xfs_aops.c | 10 +++++++++-
> >  1 file changed, 9 insertions(+), 1 deletion(-)
> > 
> > diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
> > index 3e061ea99922..84ee917014f1 100644
> > --- a/fs/xfs/xfs_aops.c
> > +++ b/fs/xfs/xfs_aops.c
> > @@ -30,6 +30,13 @@ XFS_WPC(struct iomap_writepage_ctx *ctx)
> >  	return container_of(ctx, struct xfs_writepage_ctx, ctx);
> >  }
> >  
> > +/*
> > + * Kick extra large ioends off to the workqueue. Completion will process a lot
> > + * of pages for a large bio or bio chain and a non-atomic context is required to
> > + * reschedule and avoid soft lockup warnings.
> > + */
> > +#define XFS_LARGE_IOEND	(262144 << PAGE_SHIFT)
> 
> Hm, shouldn't that 262144 have to be annotated with a 'ULL' so that a
> dumb compiler won't turn that into a u32 and shift that all the way to
> zero?
> 

Probably.. will fix.
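
Presumably something like the following (untested, just adding the
suffix Darrick suggested):

	/* force 64-bit arithmetic so the shift can't overflow a 32-bit int */
	#define XFS_LARGE_IOEND	(262144ULL << PAGE_SHIFT)

Without the suffix the constant is an int, and 2^18 << 16 no longer
fits in 32 bits once PAGE_SHIFT is 16 (64k pages).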

> I still kind of wonder about letting the limit hit 16G on power with
> 64k pages, but I guess the number of pages we have to whack is ... not
> that high?
> 

TBH, the limit is kind of picked out of a hat since we don't have any
real data on the point where the page count becomes generally too high.
I was originally capping the size of the ioend, so for that I figured
1GB on 4k pages was conservative enough to still allow fairly large
ioends without doing too much page processing. This patch doesn't cap
the I/O size, so I suppose it might be more reasonable to reduce the
threshold if we wanted to. I don't really have a strong preference
either way. Hm?
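
For the record, the threshold corresponds to 262144 pages worth of
bytes, i.e. 262144 * 4k = 1GB with 4k pages and 262144 * 64k = 16GB
with 64k pages (the 16G figure above). The byte limit scales with page
size, but the page count (and so the completion-time work) stays the
same.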

> I dunno, if you fire up a 64k-page system with fantastical IO
> capabilities, attach a realtime volume, fallocate a 32G file and then
> try to write to that, will it actually turn that into one gigantic IO?
> 

Not sure, but one report we had was an x86_64 box pushing a 10GB+ bio
chain... :P

Brian

> > +
> >  /*
> >   * Fast and loose check if this write could update the on-disk inode size.
> >   */
> > @@ -239,7 +246,8 @@ static inline bool xfs_ioend_needs_workqueue(struct iomap_ioend *ioend)
> >  {
> >  	return ioend->io_private ||
> >  		ioend->io_type == IOMAP_UNWRITTEN ||
> > -		(ioend->io_flags & IOMAP_F_SHARED);
> > +		(ioend->io_flags & IOMAP_F_SHARED) ||
> > +		(ioend->io_size >= XFS_LARGE_IOEND);
> >  }
> >  
> >  STATIC void
> > -- 
> > 2.25.4
> > 
> 



