On Thu, 2022-01-06 at 09:48 +1100, Dave Chinner wrote:
> On Wed, Jan 05, 2022 at 08:45:05PM +0000, Trond Myklebust wrote:
> > On Tue, 2022-01-04 at 21:09 -0500, Trond Myklebust wrote:
> > > On Tue, 2022-01-04 at 12:22 +1100, Dave Chinner wrote:
> > > > On Tue, Jan 04, 2022 at 12:04:23AM +0000, Trond Myklebust wrote:
> > > > > We have different reproducers. The common feature appears to be
> > > > > the need for a decently fast box with fairly large memory (128GB
> > > > > in one case, 400GB in the other). It has been reproduced with
> > > > > HDs, SSDs and NVMe systems.
> > > > >
> > > > > On the 128GB box, we had it set up with 10+ disks in a JBOD
> > > > > configuration and were running the AJA system tests.
> > > > >
> > > > > On the 400GB box, we were just serially creating large (> 6GB)
> > > > > files using fio, and that was occasionally triggering the issue.
> > > > > However, doing an strace of that workload to disk reproduced the
> > > > > problem faster :-).
> > > >
> > > > Ok, that matches up with the "lots of logically sequential dirty
> > > > data on a single inode in cache" vector that is required to create
> > > > really long bio chains on individual ioends.
> > > >
> > > > Can you try the patch below and see if it addresses the issue?
> > >
> > > That patch does seem to fix the soft lockups.
> >
> > Oops... Strike that, apparently our tests just hit the following when
> > running on AWS with that patch.
>
> OK, so there are also large contiguous physical extents being
> allocated in some cases here.
>
> > So it was harder to hit, but we still did eventually.
>
> Yup, that's what I wanted to know - it indicates that both the
> filesystem completion processing and the iomap page processing play
> a role in the CPU usage. More complex patch for you to try below...
>
> Cheers,
>
> Dave.

Thanks! Building...

-- 
Trond Myklebust
Linux NFS client maintainer, Hammerspace
trond.myklebust@xxxxxxxxxxxxxxx
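
P.S. For anyone following along without the patches in front of them: Dave's
actual patches are not reproduced here. The sketch below only illustrates the
general pattern under discussion - a writeback completion path that walks a
very long chain of ioends/bios can run for so long without yielding that it
trips the soft lockup detector, and the usual mitigation is to yield the CPU
periodically. The structure and helper names (my_ioend, my_finish_one,
my_finish_ioends) are made up for illustration; only cond_resched() and the
list helpers are real kernel primitives.

/*
 * Illustrative sketch only -- NOT the patch posted in this thread.
 * It shows the generic pattern for avoiding soft lockups when a
 * completion path has to walk a very long chain of objects: yield
 * the CPU periodically with cond_resched().
 */
#include <linux/list.h>
#include <linux/sched.h>

struct my_ioend {
	struct list_head	io_list;	/* chain of completed ioends */
	/* ... per-ioend state (bios, pages) would live here ... */
};

/* Hypothetical per-ioend work: end page writeback, unlock pages, etc. */
static void my_finish_one(struct my_ioend *ioend)
{
	/* ... */
}

static void my_finish_ioends(struct list_head *completed)
{
	struct my_ioend *ioend, *next;

	list_for_each_entry_safe(ioend, next, completed, io_list) {
		list_del_init(&ioend->io_list);
		my_finish_one(ioend);

		/*
		 * Walking a very long chain without ever yielding is what
		 * trips the soft lockup detector; give other tasks a
		 * chance to run between ioends.
		 */
		cond_resched();
	}
}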