On Mon, Mar 29, 2021 at 01:28:26PM +1100, Dave Chinner wrote:
> On Fri, Mar 26, 2021 at 10:32:44AM -0700, Darrick J. Wong wrote:
> > On Fri, Mar 26, 2021 at 11:39:38AM -0400, Brian Foster wrote:
> > > Hi all,
> > > 
> > > We have a report of a workload that deadlocks on log reservation
> > > via iomap_ioend completion batching. To start, the fs format is
> > > somewhat unique in that the log is on the smaller side (35MB) and
> > > the log stripe unit is 256k, but this is actually a default mkfs
> > > for the underlying storage. I don't have much more information
> > > wrt the workload or anything that contributes to the completion
> > > processing characteristics.
> > > 
> > > The overall scenario is that a workqueue task is executing in
> > > xfs_end_io() and blocked on transaction reservation for an
> > > unwritten extent conversion. Since this task began executing and
> > > pulled pending items from ->i_ioend_list, the latter was
> > > repopulated with 90 ioends, 67 of which have append transactions.
> > > These append transactions account for ~520k of log reservation
> > > each due to the log stripe unit. All together this consumes
> > > nearly all of the available log space (67 * ~520k is roughly 34MB
> > > of the 35MB log), prevents allocation of the aforementioned
> > > unwritten extent conversion transaction and thus leaves the fs in
> > > a deadlocked state.
> > > 
> > > I can think of different ways we could probably optimize this
> > > problem away. One example is to transfer the append transaction
> > > to the inode at bio completion time such that we retain only one
> > > per pending batch of ioends. The workqueue task would then pull
> > > this append transaction from the inode along with the ioend list
> > > and transfer it back to the last non-unwritten/shared ioend in
> > > the sorted list.
> > > 
> > > That said, I'm not totally convinced this addresses the
> > > fundamental problem of acquiring transaction reservation from a
> > > context that essentially already owns outstanding reservation vs.
> > > just making it harder to reproduce. I'm wondering if/why we need
> > > the append transaction at all. AFAICT it goes back to commit
> > > 281627df3eb5 ("xfs: log file size updates at I/O completion
> > > time") in v3.4, which changed the completion-time on-disk size
> > > update from an unlogged update to a logged one. If we continue to
> > > send these potential append ioends to the workqueue for
> > > completion processing, is there any reason we can't let the
> > > workqueue allocate the transaction as it already does for
> > > unwritten conversion?
> > 
> > Frankly I've never understood what benefit we get from
> > preallocating a transaction and letting it twist in the wind
> > consuming log space while writeback pushes data to the disk. It's
> > perfectly fine to delay ioend processing while we wait for
> > unwritten conversions and cow remapping to take effect, so what's
> > the harm in a slight delay for this?
> 
> The difference was that file size updates used to be far, far more
> common than unwritten extent updates for buffered IO. When this code
> was written, we almost never did buffered writes into unwritten
> regions, but we always did sequential writes that required a file
> size update.
> 
> Given that this code was replacing an un-synchronised size update,
> the performance impact of reserving transaction space in IO
> completion was significant.
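
(For reference, the preallocation being discussed here is
xfs_setfilesize_trans_alloc(). Condensed and lightly paraphrased from
fs/xfs/xfs_aops.c, so details may differ across kernel versions, it
looks roughly like:

	STATIC int
	xfs_setfilesize_trans_alloc(
		struct iomap_ioend	*ioend)
	{
		struct xfs_mount	*mp = XFS_I(ioend->io_inode)->i_mount;
		struct xfs_trans	*tp;
		int			error;

		/* reserve log space up front, at writeback submission time */
		error = xfs_trans_alloc(mp, &M_RES(mp)->tr_fsyncts, 0, 0, 0, &tp);
		if (error)
			return error;

		/* hand the open transaction to IO completion via the ioend */
		ioend->io_private = tp;
		/* ... freeze protection/context handling elided ... */
		return 0;
	}

It's this tr_fsyncts reservation that the log stripe unit rounds up
to the ~520k per-ioend figure above, and it sits pinned in the log
until the completion worker finally commits or cancels it.)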
> There was also the problem of XFS using global workqueues - the
> series that introduced the append transaction also introduced
> per-mount IO completion workqueues, and so there were concerns about
> blowing out the number of completion workers when we have thousands
> of pending completions all waiting on log space.
> 
> There were a bunch of considerations that probably don't exist
> anymore, plus a bunch of new ones, such as the fact that we now
> queue and merge ioends to process in a single context rather than
> just spraying ioends to worker threads to deal with. The old code
> would have worked just fine - the unwritten extent conversion would
> not have been blocking all those other IO completions...
> 
> Wait, hold on - we're putting both unwritten conversion and size
> updates onto the same queue?
> 
> Ah, yes, that's exactly what we are doing. We've punted all the size
> extensions to the unwritten workqueue, regardless of whether it's
> just a size update or not. And then the work item processes them one
> at a time after sorting them. We don't complete/free the append
> transactions until we try to merge them one at a time as we walk the
> ioend list on the inode. Hence we try to reserve more log space
> while we still hold an unknown amount of log space on the queue we
> are processing...
> 
> IOWs, the completion queuing mixing unwritten extent conversion and
> size updates is *nesting transactions*. That's the deadlock - we
> can't take a new transaction reservation while holding another
> transaction reservation in a state where it cannot make progress....
> 

Yes.

> > What happens if you replace the call to xfs_setfilesize_ioend in
> > xfs_end_ioend with xfs_setfilesize, and skip the transaction
> > preallocation altogether?
> 
> I expect that the deadlock will go away at the expense of increased
> log reservation contention in IO completion, because this unbounds
> the amount of transaction reservation concurrency that can occur in
> buffered writeback. IOWs, this might just hammer the log really,
> really hard, and that's exactly what we don't want data IO
> completion to do....
> 

The point of the original idea was to try and eliminate most of the
transaction overhead for di_size updates in the first place by doing
an update per batch (as opposed to per ioend). See the appended diff
for a crude, hacked up example of what I mean.

> I'd say the first thing to fix is the ordering problem on the
> completion queue. XFS needs more than just offset based ordering, it
> needs ioend type based ordering, too. All the size updates should be
> processed before the unwritten extent conversions, hence removing
> the nesting of transactions....
> 

Generally speaking this makes sense if we need to retain the
preallocated size update transactions for some reason. One thing to
note is that we'd be putting on-disk size updates ahead of other
ioend completions on background writeback. I suspect that might not
matter much for unwritten ioends, since we'd just expose unwritten
extents after a crash, but the effect on cow writeback completion is
less clear to me. For one, it looks like a cow ioend can both require
a transaction allocation for the fork remap and have an append
transaction already attached, so we'd probably have to tweak how
individual ioends are processed as opposed to just ordering them
differently.
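
For reference, a type aware sort might use a list_sort() comparator
along these lines. This is a hypothetical, untested sketch - the
function name is made up and iomap_sort_ioends() would need to learn
to take a filesystem-supplied comparator:

	/*
	 * Sort ioends that carry an append transaction ahead of everything
	 * else so their reservations are committed before we block on new
	 * transaction allocations, then fall back to the existing offset
	 * ordering. Note this would interact with ioend merging, which
	 * relies on the offset sort producing adjacent ioends.
	 */
	static int
	xfs_ioend_compare(void *priv, struct list_head *a, struct list_head *b)
	{
		struct iomap_ioend	*ia;
		struct iomap_ioend	*ib;

		ia = container_of(a, struct iomap_ioend, io_list);
		ib = container_of(b, struct iomap_ioend, io_list);

		/* ioends with a preallocated size update transaction first */
		if (!!ia->io_private != !!ib->io_private)
			return ia->io_private ? -1 : 1;

		if (ia->io_offset < ib->io_offset)
			return -1;
		if (ia->io_offset > ib->io_offset)
			return 1;
		return 0;
	}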
I also thought cow blocks don't necessarily have to cover shared (or
even existing) blocks in the data fork due to preallocation. So we'd
probably need to investigate things like whether this makes it
possible to put an on-disk size update ahead of a cow remap that
lands somewhere in the range between the in-core inode size and the
(smaller) on-disk inode size, and if so, whether that could result in
problematic behavior. I'm not sure this is worth the associated
complexity if there's opportunity to remove the need for most of
these transactions in the first place. Hm?

Brian

--- 8< ---

Hacked up RFC to demonstrate batched, completion-side transaction
allocation for on-disk size updates.

---
diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
index 1cc7c36d98e9..04d200e5e70d 100644
--- a/fs/xfs/xfs_aops.c
+++ b/fs/xfs/xfs_aops.c
@@ -18,6 +18,9 @@
 #include "xfs_bmap_util.h"
 #include "xfs_reflink.h"
 
+/* XXX */
+#define IOMAP_F_APPEND		0x2000
+
 struct xfs_writepage_ctx {
 	struct iomap_writepage_ctx ctx;
 	unsigned int		data_seq;
@@ -182,12 +185,10 @@ xfs_end_ioend(
 		error = xfs_reflink_end_cow(ip, offset, size);
 	else if (ioend->io_type == IOMAP_UNWRITTEN)
 		error = xfs_iomap_write_unwritten(ip, offset, size, false);
-	else
-		ASSERT(!xfs_ioend_is_append(ioend) || ioend->io_private);
 
 done:
-	if (ioend->io_private)
-		error = xfs_setfilesize_ioend(ioend, error);
+	if (ioend->io_flags & IOMAP_F_APPEND)
+		error = xfs_setfilesize(ip, offset, size);
 	iomap_finish_ioends(ioend, error);
 	memalloc_nofs_restore(nofs_flag);
 }
@@ -221,16 +222,27 @@ xfs_end_io(
 	struct iomap_ioend	*ioend;
 	struct list_head	tmp;
 	unsigned long		flags;
+	xfs_off_t		maxendoff;
 
 	spin_lock_irqsave(&ip->i_ioend_lock, flags);
 	list_replace_init(&ip->i_ioend_list, &tmp);
 	spin_unlock_irqrestore(&ip->i_ioend_lock, flags);
 
 	iomap_sort_ioends(&tmp);
+
+	/* XXX: track max endoff manually? */
+	ioend = list_last_entry(&tmp, struct iomap_ioend, io_list);
+	maxendoff = ioend->io_offset + ioend->io_size;
+	if (((ioend->io_flags & IOMAP_F_SHARED) ||
+	     (ioend->io_type != IOMAP_UNWRITTEN)) &&
+	    xfs_ioend_is_append(ioend))
+		ioend->io_flags |= IOMAP_F_APPEND;
+
 	while ((ioend = list_first_entry_or_null(&tmp, struct iomap_ioend,
 			io_list))) {
 		list_del_init(&ioend->io_list);
 		iomap_ioend_try_merge(ioend, &tmp, xfs_ioend_merge_private);
+		ASSERT(ioend->io_offset + ioend->io_size <= maxendoff);
 		xfs_end_ioend(ioend);
 	}
 }
@@ -250,8 +263,6 @@ xfs_end_bio(
 	struct xfs_inode	*ip = XFS_I(ioend->io_inode);
 	unsigned long		flags;
 
-	ASSERT(xfs_ioend_needs_workqueue(ioend));
-
 	spin_lock_irqsave(&ip->i_ioend_lock, flags);
 	if (list_empty(&ip->i_ioend_list))
 		WARN_ON_ONCE(!queue_work(ip->i_mount->m_unwritten_workqueue,
@@ -487,6 +498,7 @@ xfs_prepare_ioend(
 	int			status)
 {
 	unsigned int		nofs_flag;
+	bool			append = false;
 
 	/*
	 * We can allocate memory here while doing writeback on behalf of
@@ -501,17 +513,17 @@ xfs_prepare_ioend(
 				ioend->io_offset, ioend->io_size);
 	}
 
-	/* Reserve log space if we might write beyond the on-disk inode size. */
+	/* XXX: quick hack to queue append ioends w/o transaction */
 	if (!status &&
 	    ((ioend->io_flags & IOMAP_F_SHARED) ||
 	     ioend->io_type != IOMAP_UNWRITTEN) &&
 	    xfs_ioend_is_append(ioend) &&
 	    !ioend->io_private)
-		status = xfs_setfilesize_trans_alloc(ioend);
+		append = true;
 
 	memalloc_nofs_restore(nofs_flag);
 
-	if (xfs_ioend_needs_workqueue(ioend))
+	if (xfs_ioend_needs_workqueue(ioend) || append)
 		ioend->io_bio->bi_end_io = xfs_end_bio;
 
 	return status;
 }
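
For clarity, the xfs_setfilesize() call used in xfs_end_ioend() above
allocates and commits its own transaction at completion time, roughly
(again condensed from fs/xfs/xfs_aops.c):

	int
	xfs_setfilesize(
		struct xfs_inode	*ip,
		xfs_off_t		offset,
		size_t			size)
	{
		struct xfs_mount	*mp = ip->i_mount;
		struct xfs_trans	*tp;
		int			error;

		/* reserve log space here, at completion time... */
		error = xfs_trans_alloc(mp, &M_RES(mp)->tr_fsyncts, 0, 0, 0, &tp);
		if (error)
			return error;

		/* ...then take the ilock, log the new di_size and commit */
		return __xfs_setfilesize(ip, tp, offset, size);
	}

The idea is that, since the list is sorted by offset, only the ioend
with the highest end offset needs to carry the size update, so only
that one is flagged IOMAP_F_APPEND and we end up with at most one
such transaction per batch instead of one per append ioend.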