On Thu, Jan 06, 2022 at 06:36:52PM +0000, Trond Myklebust wrote:
> On Thu, 2022-01-06 at 09:48 +1100, Dave Chinner wrote:
> > On Wed, Jan 05, 2022 at 08:45:05PM +0000, Trond Myklebust wrote:
> > > On Tue, 2022-01-04 at 21:09 -0500, Trond Myklebust wrote:
> > > > On Tue, 2022-01-04 at 12:22 +1100, Dave Chinner wrote:
> > > > > On Tue, Jan 04, 2022 at 12:04:23AM +0000, Trond Myklebust wrote:
> > > > > > We have different reproducers. The common feature appears
> > > > > > to be the need for a decently fast box with fairly large
> > > > > > memory (128GB in one case, 400GB in the other). It has been
> > > > > > reproduced with HDs, SSDs and NVMe systems.
> > > > > >
> > > > > > On the 128GB box, we had it set up with 10+ disks in a JBOD
> > > > > > configuration and were running the AJA system tests.
> > > > > >
> > > > > > On the 400GB box, we were just serially creating large
> > > > > > (> 6GB) files using fio and that was occasionally triggering
> > > > > > the issue. However, doing an strace of that workload to disk
> > > > > > reproduced the problem faster :-).
> > > > >
> > > > > Ok, that matches up with the "lots of logically sequential
> > > > > dirty data on a single inode in cache" vector that is required
> > > > > to create really long bio chains on individual ioends.
> > > > >
> > > > > Can you try the patch below and see if it addresses the issue?
> > > >
> > > > That patch does seem to fix the soft lockups.
> > >
> > > Oops... Strike that, apparently our tests just hit the following
> > > when running on AWS with that patch.
> >
> > OK, so there are also large contiguous physical extents being
> > allocated in some cases here.
> >
> > > So it was harder to hit, but we still did eventually.
> >
> > Yup, that's what I wanted to know - it indicates that both the
> > filesystem completion processing and the iomap page processing
> > play a role in the CPU usage. More complex patch for you to try
> > below...
> >
> > Cheers,
> >
> > Dave.
>
> Hi Dave,
>
> This patch got further than the previous one. However, it too failed
> on the same AWS setup after we started creating larger (in this case
> 52GB) files. The previous patch failed at 15GB.

Ok, so that indicates that the page cache pages are being allocated
at write() time from physically contiguous pages, so we are ending
up with a large number of bvec merges in the bio layer during
writeback. i.e. we're building multipage bvecs in the bios, and so
the segment count per bio is low (maybe one segment per bio, instead
of ~256 if the pages are not physically contiguous). A toy model of
that merging is sketched below.

I'd hoped that wasn't going to be an issue because, unless memory is
largely empty and the workload is completely single threaded, you
can't get gigabyte-scale runs of physically contiguous pages in the
page cache for sequential writes. Hence I figured the segment limits
would trigger long before we got into the "millions of pages to
complete" range needed to trigger the soft lockup.

Ok, I'll ignore bio segments and the upcoming multi-page folio stuff
that will largely result in 1:1 bio segment:folio ratios, and just
count pages instead...
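
To make the segment accounting above concrete, here's a toy
userspace model of the merge rule. This is a minimal sketch only: it
assumes 4KiB pages and a 256-segment cap (in the spirit of
BIO_MAX_VECS), and the struct and helper names are simplified
stand-ins, not the block layer's actual bvec code.

/*
 * Toy model of multipage bvec merging when building a bio.
 * Assumes 4KiB pages and a 256-segment cap; the types and
 * helpers are simplified stand-ins, not kernel code.
 */
#include <stdio.h>

#define PAGE_SZ		4096UL
#define MAX_SEGS	256

struct seg {
	unsigned long long phys;	/* physical address of segment */
	unsigned long len;		/* segment length in bytes */
};

/* Add one page to the bio model; merge if physically contiguous. */
static int add_page(struct seg *segs, int *nsegs, unsigned long long phys)
{
	if (*nsegs &&
	    segs[*nsegs - 1].phys + segs[*nsegs - 1].len == phys) {
		segs[*nsegs - 1].len += PAGE_SZ;	/* merge */
		return 0;
	}
	if (*nsegs == MAX_SEGS)
		return -1;				/* bio is full */
	segs[*nsegs].phys = phys;
	segs[*nsegs].len = PAGE_SZ;
	(*nsegs)++;
	return 0;
}

int main(void)
{
	struct seg segs[MAX_SEGS];
	int nsegs = 0;
	unsigned long long phys = 0x100000;
	long pages;

	/* Feed in physically contiguous pages: they all merge. */
	for (pages = 0; pages < (1L << 20); pages++) {
		if (add_page(segs, &nsegs, phys))
			break;
		phys += PAGE_SZ;
	}
	printf("%ld pages packed into %d segment(s)\n", pages, nsegs);
	return 0;
}

With physically scattered pages, the same loop fills all 256 segments
after 256 pages and the bio is full; with contiguous pages everything
merges into a single segment, so the per-bio segment limit never gets
a chance to bound the amount of completion work per bio.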
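
For scale: a 52GB file is roughly 13 million 4KiB pages, so if the
work done at completion is only bounded per segment, a fully-merged
bio chain can spin over millions of pages without ever hitting a
scheduling point. Counting pages directly looks something like the
sketch below. Again a userspace model: yield_cpu() stands in for the
kernel's cond_resched(), and BATCH_PAGES is an arbitrary placeholder,
not a tuned value.

/*
 * Sketch of page-count-based yielding in completion processing.
 * Userspace model: yield_cpu() stands in for the kernel's
 * cond_resched(), and BATCH_PAGES is an arbitrary placeholder.
 */
#include <stdio.h>
#include <sched.h>

#define PAGE_SZ		4096ULL
#define BATCH_PAGES	4096ULL		/* yield every ~16MiB of pages */

static unsigned long long yields;

static void yield_cpu(void)
{
	sched_yield();		/* kernel code would cond_resched() */
	yields++;
}

int main(void)
{
	unsigned long long bytes = 52ULL << 30;	/* the 52GB reproducer */
	unsigned long long pages = bytes / PAGE_SZ;
	unsigned long long done;

	for (done = 0; done < pages; done++) {
		/* ... per-page completion work would happen here ... */
		if ((done + 1) % BATCH_PAGES == 0)
			yield_cpu();
	}
	printf("%llu pages completed, %llu scheduling points\n",
	       pages, yields);
	return 0;
}

The point is just that the number of scheduling points scales with
the pages completed, not with how well the bvecs happened to merge.

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx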