Re: [PATCH] iomap: Fix the write_count in iomap_add_to_ioend().

On Mon, Aug 31, 2020 at 12:01:07PM +0800, Ming Lei wrote:
> On Tue, Aug 25, 2020 at 10:49:17AM -0400, Brian Foster wrote:
> > cc Ming
> > 
> > On Tue, Aug 25, 2020 at 10:42:03AM +1000, Dave Chinner wrote:
> > > On Mon, Aug 24, 2020 at 11:48:41AM -0400, Brian Foster wrote:
> > > > On Mon, Aug 24, 2020 at 04:04:17PM +0100, Christoph Hellwig wrote:
> > > > > On Mon, Aug 24, 2020 at 10:28:23AM -0400, Brian Foster wrote:
> > > > > > Do I understand the current code (__bio_try_merge_page() ->
> > > > > > page_is_mergeable()) correctly in that we're checking for physical page
> > > > > > contiguity and not necessarily requiring a new bio_vec per physical
> > > > > > page?
> > > > > 
> > > > > Yes.
> > > > > 
> > > > 
> > > > Ok. I also realize now that this occurs on a kernel without commit
> > > > 07173c3ec276 ("block: enable multipage bvecs"). That is probably a
> > > > contributing factor, but it's not clear to me whether it's feasible to
> > > > backport whatever supporting infrastructure is required for that
> > > > mechanism to work (I suspect not).
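
Side note, to make that concrete: with multipage bvecs a new page only
costs a new bio_vec when it is not physically contiguous with the data
already in the last one. A toy sketch of the check -- not the actual
__bio_try_merge_page()/page_is_mergeable() code, the helper name below
is made up:

/*
 * Toy model only: can this page extend the last bvec, or does it need
 * a new one?  The real check lives in the block layer.
 */
static bool toy_page_is_mergeable(const struct bio_vec *bv,
		struct page *page, unsigned int off)
{
	phys_addr_t vec_end = page_to_phys(bv->bv_page) +
			      bv->bv_offset + bv->bv_len;

	/* physically contiguous with the end of the last bvec? */
	return vec_end == page_to_phys(page) + off;
}

Without that merging every page consumes its own bvec, so bios fill up
much sooner and the resulting chains get far longer, which is
presumably part of why the older kernel fares worse here.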
> > > > 
> > > > > > With regard to Dave's earlier point around seeing excessively sized bio
> > > > > > chains.. If I set up a large memory box with high dirty mem ratios and
> > > > > > do contiguous buffered overwrites over a 32GB range followed by fsync, I
> > > > > > can see upwards of 1GB per bio and thus chains on the order of 32+ bios
> > > > > > for the entire write. If I play games with how the buffered overwrite is
> > > > > > submitted (i.e., in reverse), however, I can occasionally reproduce
> > > > > > a ~32GB chain of ~32k bios, which I think is what leads to problems in
> > > > > > I/O completion on some systems. Granted, I don't reproduce soft lockup
> > > > > > issues on my system with that behavior, so perhaps there's more to that
> > > > > > particular issue.
> > > > > > 
> > > > > > Regardless, it seems reasonable to me to at least have a conservative
> > > > > > limit on the length of an ioend bio chain. Would anybody object to
> > > > > > iomap_ioend growing a chain counter and perhaps forcing a new ioend
> > > > > > if we chain something like more than 1k bios at once?
> > > > > 
> > > > > So what exactly is the problem with processing a long chain in the
> > > > > workqueue vs multiple small chains?  Maybe we need a cond_resched()
> > > > > here and there, but I don't see how we'd substantially change behavior.
> > > > > 
> > > > 
> > > > The immediate problem is a watchdog lockup detection in bio completion:
> > > > 
> > > >   NMI watchdog: Watchdog detected hard LOCKUP on cpu 25
> > > > 
> > > > This effectively lands at the following segment of iomap_finish_ioend():
> > > > 
> > > > 		...
> > > >                /* walk each page on bio, ending page IO on them */
> > > >                 bio_for_each_segment_all(bv, bio, iter_all)
> > > >                         iomap_finish_page_writeback(inode, bv->bv_page, error);
> > > > 
> > > > I suppose we could add a cond_resched(), but is that safe directly
> > > > inside of a ->bi_end_io() handler? Another option could be to dump large
> > > > chains into the completion workqueue, but we may still need to track the
> > > > length to do that. Thoughts?
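
Just to make that option concrete -- a sketch only, not a patch, and it
assumes the completion is actually running in process context (e.g.
punted to the completion workqueue), which is exactly the open question
for ->bi_end_io():

	/* walk each page on bio, ending page IO on them */
	bio_for_each_segment_all(bv, bio, iter_all) {
		iomap_finish_page_writeback(inode, bv->bv_page, error);
		/* not safe if we are in irq/softirq context */
		cond_resched();
	}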
> > > 
> > > We have ioend completion merging that will run the completion once
> > > for all the pending ioend completions on that inode. IOWs, we do not
> > > need to build huge chains at submission time to batch up completions
> > > efficiently. However, huge bio chains at submission time do cause
> > > issues with writeback fairness, pinning GBs of RAM as unreclaimable
> > > for seconds because the pages are queued for completion while we are
> > > still submitting the bio chain and submission is being throttled by
> > > the block layer writeback throttle, etc. Not to mention the latency
> > > of stable pages in a situation like this - an mmap() write fault
> > > could stall for many seconds waiting for a huge bio chain to finish
> > > submission and run completion processing, even when the IO for the
> > > given page we faulted on completed before the page fault
> > > occurred...
> > > 
> > > Hence I think we really do need to cap the length of the bio
> > > chains here so that we start completing and ending page writeback on
> > > large writeback ranges long before the writeback code finishes
> > > submitting the range it was asked to write back.
> > > 
> > 
> > Ming pointed out separately that limiting the bio chain itself might not
> > be enough because with multipage bvecs, we can effectively capture the
> > same number of pages in far fewer bios. Given that, what do you think
> > about something like the patch below to limit ioend size? This
> > effectively limits the number of pages per ioend regardless of whether
> > in-core state results in a small chain of dense bios or a large chain of
> > smaller bios, without requiring any new explicit page count tracking.
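
For reference, the idea is simply to refuse to extend the current ioend
once it already covers "enough" data, so writeback starts a new ioend
(and thus a bounded completion) instead. Roughly like the following --
the name and the limit here are illustrative, not the actual patch:

	/* illustrative cap on how much data a single ioend may cover */
	#define IOEND_MAX_SIZE	(1024 * 1024 * 1024)	/* e.g. 1GB */

	/* in iomap_can_add_to_ioend(), alongside the existing checks: */
	if (wpc->ioend->io_size >= IOEND_MAX_SIZE)
		return false;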
> 
> Hello Brian,
> 
> This patch looks fine.
> 
> However, I am wondering why iomap has to chain bios in one ioend rather than
> submitting each bio in the usual way, just as fs/direct-io.c does? Then each
> bio could complete its pages in its own .bi_end_io().
> 

I think it's mainly for efficiency and code simplicity reasons. The
ioend describes a contiguous range of blocks with the same io type
(written, unwritten, append, etc.), so whatever post-completion action
might be required for a particular ioend (e.g. unwritten conversion)
shouldn't execute until I/O completes on the entire range. I believe
this goes back to XFS commit 0e51a8e191db ("xfs: optimize bio handling
in the buffer writeback path"), which reimplemented XFS's custom ioend
behavior on top of bio chains; that code was eventually lifted from
XFS into iomap.
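
As a toy model of that -- not the iomap code, all of the names below
are made up -- the per-bio completions effectively just drop a count on
the ioend, and the range-wide work only runs once the whole chain has
finished:

/* toy model: one ioend covers a contiguous, same-type range */
struct toy_ioend {
	loff_t		offset;		/* start of the range */
	size_t		size;		/* length of the range */
	int		type;		/* written, unwritten, append, ... */
	atomic_t	remaining;	/* bios still in flight */
};

/* range-wide post-processing, e.g. unwritten conversion */
static void toy_ioend_finish(struct toy_ioend *ioend, int error);

static void toy_ioend_bio_done(struct toy_ioend *ioend, int error)
{
	/* only the last bio of the chain triggers the ioend work */
	if (atomic_dec_and_test(&ioend->remaining))
		toy_ioend_finish(ioend, error);
}

That's essentially what the bio chaining buys us: completion propagates
through the chain, so the ioend-wide completion handler runs once for
the entire range.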

Brian

> 
> thanks,
> Ming
> 



