Re: [PATCH] iomap: Address soft lockup in iomap_finish_ioend()

On Thu, 2022-01-06 at 15:07 -0500, Brian Foster wrote:
> On Thu, Jan 06, 2022 at 06:36:52PM +0000, Trond Myklebust wrote:
> > On Thu, 2022-01-06 at 09:48 +1100, Dave Chinner wrote:
> > > On Wed, Jan 05, 2022 at 08:45:05PM +0000, Trond Myklebust wrote:
> > > > On Tue, 2022-01-04 at 21:09 -0500, Trond Myklebust wrote:
> > > > > On Tue, 2022-01-04 at 12:22 +1100, Dave Chinner wrote:
> > > > > > On Tue, Jan 04, 2022 at 12:04:23AM +0000, Trond Myklebust
> > > > > > wrote:
> > > > > > > We have different reproducers. The common feature appears
> > > > > > > to be the need for a decently fast box with fairly large
> > > > > > > memory (128GB in one case, 400GB in the other). It has been
> > > > > > > reproduced with HDs, SSDs and NVMe systems.
> > > > > > > 
> > > > > > > On the 128GB box, we had it set up with 10+ disks in a
> > > > > > > JBOD
> > > > > > > configuration and were running the AJA system tests.
> > > > > > > 
> > > > > > > On the 400GB box, we were just serially creating large
> > > > > > > (> 6GB) files using fio, and that was occasionally
> > > > > > > triggering the issue. However, doing an strace of that
> > > > > > > workload to disk reproduced the problem faster :-).
> > > > > > 
> > > > > > Ok, that matches up with the "lots of logically sequential
> > > > > > dirty
> > > > > > data on a single inode in cache" vector that is required to
> > > > > > create
> > > > > > really long bio chains on individual ioends.
> > > > > > 
> > > > > > Can you try the patch below and see if it addresses the
> > > > > > issue?
> > > > > > 
> > > > > 
> > > > > That patch does seem to fix the soft lockups.
> > > > > 
> > > > 
> > > > Oops... Strike that; apparently our tests just hit the
> > > > following when running on AWS with that patch.
> > > 
> > > OK, so there are also large contiguous physical extents being
> > > allocated in some cases here.
> > > 
> > > > So it was harder to hit, but we still did eventually.
> > > 
> > > Yup, that's what I wanted to know - it indicates that both the
> > > filesystem completion processing and the iomap page processing
> > > play a role in the CPU usage. More complex patch for you to try
> > > below...
> > > 
> > > Cheers,
> > > 
> > > Dave.
> > 
> > Hi Dave,
> > 
> > This patch got further than the previous one. However, it too
> > failed on the same AWS setup after we started creating larger
> > (in this case 52GB) files. The previous patch failed at 15GB.
> > 
> 
> Care to try my old series [1] that attempted to address this,
> assuming it still applies to your kernel? You should only need
> patches 1 and 2. You can toss in patch 3 if you'd like, but as
> Dave's earlier patch has shown, this can just make it harder to
> reproduce.
> 
> I don't know if this will go anywhere as is, but I was never able
> to get any sort of confirmation from the previous reporter to
> understand at least whether it is effective. I agree with Jens'
> earlier concern that the per-page yields are probably overkill, but
> if they were otherwise effective it shouldn't be that hard to add
> filtering. Patch 3 could also technically be used in place of patch
> 1 if we really wanted to go that route, but I wouldn't take that
> step until there is some verification that the yielding heuristic
> is effective.
> 
> Brian
> 
> [1]
> https://lore.kernel.org/linux-xfs/20210517171722.1266878-1-bfoster@xxxxxxxxxx/
> 

Hi Brian,

I would expect those to work, since the first patch is essentially
identical to the one I wrote and tested before trying Dave's first
patch version (at least for the special case of XFS). However, we
never did test that patch on the AWS setup, so let me try your
patches 1 & 2 and see if they get us further than 52GB.

-- 
Trond Myklebust
Linux NFS client maintainer, Hammerspace
trond.myklebust@xxxxxxxxxxxxxxx
